http://mathoverflow.net/questions/66812?sort=votes
## Ramanujan’s eccentric Integral formula

The Wikipedia page on Srinivasa Ramanujan gives a very strange formula:

Ramanujan: If $0 < a < b + \frac{1}{2}$ then
$$\int\limits_{0}^{\infty} \frac{ 1 + x^{2}/(b+1)^{2}}{ 1 + x^{2}/a^{2}} \times \frac{ 1 + x^{2}/(b+2)^{2}}{ 1 + x^{2}/(a+1)^{2}} \times \cdots \ dx = \frac{\sqrt{\pi}}{2} \small{\frac{ \Gamma(a+\frac{1}{2}) \cdot \Gamma(b+1)\: \Gamma(b-a+\frac{1}{2})}{\Gamma(a) \cdot \Gamma(b+\frac{1}{2}) \cdot \Gamma(b-a+1)}}$$

• The question I would like to pose to this community is: what could be the intuition behind discovering this formula?

• Next, I see that Ramanujan discovered a lot of formulas expressing $\pi$ as a series. What is the advantage of having the same number expressed as a series in different ways? Is it useful at all?

• From what I know, Ramanujan basically worked on infinite series, continued fractions, etc. I have never seen applications of continued fractions in the real world, so I would also like to know whether continued fractions have any applications.

Hope I haven't asked too many questions. As I was posting this question, the last question on applications of continued fractions popped up, and I thought it would be a good idea to pose it here instead of posing it as a new question.

- 3 I really think you're asking three separate questions here. A split might not be such a bad idea. – Andrey Rekalo Jun 3 2011 at 10:42
- 3 I think you should read up on Ramanujan some more. There is really not much percentage in taking a formula like this in isolation and brainstorming some queries about it. – Charles Matthews Jun 3 2011 at 11:50
- 2 The formula you cite is incorrect; please double check (b+1 in the second term, instead of b+2). – S. Sra Jun 3 2011 at 12:14
- 1 You've asked about the intuition behind Ramanujan's formulas before. I doubt you'll get a better answer this time, but maybe someone will prove me wrong. – Gerry Myerson Jun 3 2011 at 12:23
- 4 Believe it or not, the integrand is a quotient of products of $\Gamma$-functions, integrated over a vertical line in the complex plane, and the integral pops out after a clever use of the Plancherel theorem for Mellin transforms. – David Hansen Jun 3 2011 at 14:26

## 3 Answers

This is one of those precious cases when Ramanujan himself provided (a sketch of) a proof. The identity was published in his paper "Some definite integrals" (Mess. Math. 44 (1915), pp. 10-18) together with several related formulae.

It might be instructive to look first at the simpler identity (i.e. the limiting case when $b\to\infty$; the identity mentioned in the original question can be obtained by a similar approach):
$$\int\limits_{0}^{\infty} \prod_{k=0}^{\infty}\frac{1}{ 1 + x^{2}/(a+k)^{2}}\,dx = \frac{\sqrt{\pi}}{2} \frac{ \Gamma(a+\frac{1}{2})}{\Gamma(a)},\quad a>0.\qquad\qquad\qquad(1)$$

Ramanujan derives (1) by using a partial fraction decomposition of the product $\prod_{k=0}^{n}\frac{1}{ 1 + x^{2}/(a+k)^{2}}$, integrating term-wise, and passing to the limit $n\to\infty$. He also indicates that alternatively (1) is implied by the factorization
$$\prod_{k=0}^{\infty}\left[1+\frac{x^2}{(a+k)^2}\right] = \frac{ [\Gamma(a)]^2}{\Gamma(a+ix)\Gamma(a-ix)},$$
which follows readily from Euler's product formula for the gamma function.
Thus (1) is equivalent to the formula
$$\int\limits_{0}^{\infty}\Gamma(a+ix)\Gamma(a-ix)\,dx=\frac{\sqrt{\pi}}{2} \Gamma(a)\Gamma\left(a+\frac{1}{2}\right).$$

There is a nice paper "Wallis-Ramanujan-Schur-Feynman" by Amdeberhan et al (American Mathematical Monthly 117 (2010), pp. 618-632) that discusses interesting combinatorial aspects of formula (1) and its generalizations.

-

Regarding your three questions:

I have no idea what the intuition is behind Ramanujan's formula, but I hope someone else does, since I'd certainly like to know.

Expressing the same number as a series in different ways is generally not useful per se, assuming you don't count being interesting or beautiful as being useful. Replacing a slowly converging series with a rapidly converging one can be useful if you want to compute a numerical approximation. If you get lucky, a new series may lead to a remarkable algorithm (such as the Bailey-Borwein-Plouffe algorithm for computing individual binary digits of $\pi$) or irrationality proof (such as Apery's proof of the irrationality of $\zeta(3)$), but you certainly can't count on that.

As for applications of continued fractions in the real world, I don't know of many applications for continued fraction expansions of real numbers. Of course simple continued fractions are critical if you want good rational approximations, but not very many people do want them in practice. On the other hand, continued fraction expansions of functions have lots of real-world uses. The intuition is that people are rarely interested in the properties of specific numbers in applied problems, but they are more likely to be interested in specific functions.

For example, the Berlekamp-Massey algorithm efficiently reconstructs a linear recurrence relation from a sequence satisfying it, and it can be reformulated in terms of computing the continued fraction expansion of a rational function. This is "linear feedback shift register synthesis", which comes up in many practical problems; among other things, it's used for decoding Reed-Solomon error-correcting codes, which are used on CDs, DVDs, and Blu-ray discs. It also comes up in some Krylov subspace algorithms for sparse linear algebra, such as the Wiedemann algorithm.

Another practical application is computing in the Jacobian of a hyperelliptic curve $y^2 = f(x)$ over a field $K$ (which may not sound very practical, but it is if you want to do hyperelliptic curve cryptography). This amounts to finding a reduction algorithm for divisors on the curve, and it can be done very efficiently if you know how the continued fractions of functions in $K(x,\sqrt{f(x)})$ behave.

A third case is computing Pade approximants of transcendental functions (this goes slightly beyond continued fractions, but is closely connected). Pade approximants give excellent rational function approximations to a given function, which are much more tractable on a computer than the original function was.

-

Here is a proof of Ramanujan's identity (thanks to Todd Trimble for encouraging me to post this!). As Andrey Rekalo notes, we have the identity $\prod_{k=0}^{\infty}(1+\frac{x^2}{(k+a)^2})=\frac{\Gamma(a)^2}{|\Gamma(a+ix)|^2}$. In particular, the integrand in Ramanujan's integral is $\frac{\Gamma(b+1)^2 |\Gamma(a+ix)|^2}{\Gamma(a)^2 |\Gamma(b+1+ix)|^2}$.
Hence, after a little algebra (and also changing $b$ to $b-1$; I personally think Ramanujan made the wrong aesthetic choice here), we need to prove the integral evaluation
$$I=\int_{-\infty}^{\infty} \frac{|\Gamma(a+ix)|^2}{|\Gamma(b+ix)|^2}\,dx=\sqrt{\pi}\,\frac{\Gamma(a)\Gamma(a+1/2)\Gamma(b-a-1/2)}{\Gamma(b-1/2)\Gamma(b)\Gamma(b-a)}.$$

Now, if $f(x)$ has Mellin transform $F(s)$, then one form of Parseval's theorem for Mellin transforms is the identity
$$\int_{0}^{\infty}f(x)^2\,x^{-1}\,dx=\frac{1}{2\pi}\int_{-\infty}^{\infty}|F(it)|^2\, dt$$
(under suitable conditions, of course). Applying this with the Mellin pair
$$f(x)=\Gamma(b-a)^{-1}x^{a}(1-x)^{b-a-1} \ \text{ for } 0\leq x \leq 1 \ (\text{and } f=0 \text{ otherwise}), \qquad F(s)=\frac{\Gamma(s+a)}{\Gamma(s+b)},$$
gives
$$I=2\pi\, \Gamma(b-a)^{-2} \int_{0}^{1}x^{2a-1}(1-x)^{2b-2a-2}\,dx =2\pi\, \Gamma(b-a)^{-2}\, \frac{\Gamma(2a) \Gamma(2b-2a-1)}{\Gamma(2b-1)}.$$

Next, apply the duplication formula $\Gamma(2z)=2^{2z-1}\pi^{-1/2}\Gamma(z)\Gamma(z+1/2)$ to each of the $\Gamma$-functions in the quotient here, getting
$$I=\sqrt{\pi}\, \frac{\Gamma(a)\Gamma(a+1/2)\Gamma(b-a-1/2)\Gamma(b-a)}{\Gamma(b-a)^2 \Gamma(b-1/2) \Gamma(b)},$$
and cancelling a $\Gamma(b-a)$ concludes the proof.

Exercise: Give a proof, along similar lines, of the formula
$$\int_{-\infty}^{\infty} |\Gamma(a+ix)\Gamma(b+ix)|^2\, dx=\sqrt{\pi}\,\frac{\Gamma(a)\Gamma(a+1/2)\Gamma(b)\Gamma(b+1/2)\Gamma(a+b)}{\Gamma(a+b+1/2)},$$
and determine for what range of $a,b$ it holds.
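The identities above are easy to sanity-check numerically via the gamma-function factorization quoted in the answers. The following sketch assumes the mpmath library is available; the parameter values `a` and `b` are arbitrary test choices, not anything taken from Ramanujan's paper.

```python
# Numerical check of Ramanujan's integral formula, using the factorization
#   prod_{k>=0} (1 + x^2/(c+k)^2) = Gamma(c)^2 / (Gamma(c+ix) Gamma(c-ix))
# quoted in the answers above.  Requires mpmath.
from mpmath import mp, mpc, mpf, gamma, quad, sqrt, pi, inf

mp.dps = 30                      # working precision (decimal digits)
a, b = mpf("0.7"), mpf("1.9")    # arbitrary parameters with 0 < a < b + 1/2

def integrand(x):
    # Gamma(b+1)^2 |Gamma(a+ix)|^2 / (Gamma(a)^2 |Gamma(b+1+ix)|^2)
    num = gamma(b + 1) ** 2 * abs(gamma(mpc(a, x))) ** 2
    den = gamma(a) ** 2 * abs(gamma(mpc(b + 1, x))) ** 2
    return num / den

lhs = quad(integrand, [0, inf])
rhs = sqrt(pi) / 2 * (gamma(a + mpf("0.5")) * gamma(b + 1) * gamma(b - a + mpf("0.5"))) \
      / (gamma(a) * gamma(b + mpf("0.5")) * gamma(b - a + 1))
print(lhs, rhs)                  # the two numbers should agree to the working precision
```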
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402946829795837, "perplexity_flag": "head"}
http://mathoverflow.net/questions/34110/algebraic-geometry-examples/34179
## Algebraic geometry examples ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What are some surprising or memorable examples in algebraic geometry, suitable for a course I'll be teaching on chapters 1-2 of Hartshorne (varieties, introductory schemes)? I'd prefer examples that are unusual or nonstandard, as I already know many of the standard ones (27 lines on a cubic surface, etc). To illustrate the sorts of things I am looking for, here are some examples that would have been useful answers if I had not already thought of them: The proof of the Kakeya conjecture for finite fields. Free sheaves need not be projective The Hilbert scheme of m points on an n dimensional variety can have dimension larger than mn. The scheme of nilpotent matrices is not reduced. The 1-line proof of Pascal's theorem from Bezout's theorem. Spec Z[x] Resolution of the Whitney umbrella The related threads http://mathoverflow.net/questions/28496/what-should-be-learned-in-a-first-serious-schemes-course and http://mathoverflow.net/questions/20112/interesting-results-in-algebraic-geometry-accessible-to-3rd-year-undergraduates also have some good examples, such as Grassmannians. Added later: Thanks for all the examples; I'm embarrassed to admit that some of them are counterexamples to statements I would have guessed were true. Over the next few weeks I will gradually add answers below (with credit to those who suggested them) to my draft course notes (These notes still need a lot of corrections/expansion/rewriting; they should have reached a more stable state by Dec 2010) - 13 Maybe this should be community wiki? – Charles Siegel Aug 1 2010 at 14:57 11 A polynomial in several variables over $\mathbf{Z}$ which is irred. over $\overline{\mathbf{Q}}$ is irred. over $\overline{\mathbf{F}}_p$ for all but finitely many $p$, whereas $X^2 - 2$ is irred. over $\mathbf{Q}$ but reducible mod $p$ for "half" of all $p$. This helps to show that what usually varies well in families (in the sense of cutting out an open, or at least constructible, locus in the base) are properties of geometric fibers rather than of actual fibers. The above hypersurface example is an instructive HW exercise after they learn the Nullstellensatz over a general field. – BCnrd Aug 1 2010 at 15:23 8 After introducing gp varieties (or gp schemes), on HW have them use Yoneda to prove a map which respects mult. must respect id. and inv., and then ask them to prove it "by hand" using Hopf alg's. After univ. property of proj. space, define ${\rm{PGL}}_n$ and prove it's a gp scheme and ask them to describe pts valued in any $A$, show no surprise for local $A$, but for Dedekind $A$ with a non-principal ideal $I$ that's 2-torsion in class group (so $I \oplus I \simeq A^2$!), ${\rm{PGL}}_2(A)$ is "bigger" than expected. Ask them to make analogue in difft'l geometry, so not a quirk of schemes. – BCnrd Aug 1 2010 at 15:39 7 Are there any other working algebraic / arithmetic geometers / number theorists out there wishing they could take Richard's "introductory" course? If he talks about half of these things, I would gladly finance a student to take and post lecture notes. – Pete L. Clark Aug 1 2010 at 15:49 6 I once found the following exercise "illustrative": "What are the points of affine 1-space over C? (i.e. what are the points of Spec(C[X]). What are the points of affine 1-space over Q? (i.e. Spec(Q[X]). Describe the obvious map from Spec(C[X]) to Spec(Q[X])." 
This did my head in a bit: I had always imagined that Spec(Q[X]) was just "probably Q plus a generic point, or something" until I did this exercise. – Kevin Buzzard Aug 1 2010 at 18:21 show 10 more comments ## 20 Answers The symmetric square of a genus $2$ curve is a blow up of a 2-torus in one point (the canoncal divisor in the Jacobian). Nice example for Hilbert schemes. - I think that this example (which is essentially the explicit calculation of the Jacobian of a genus 2 curve) is a great one. – Emerton Aug 2 2010 at 2:42 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The image of an affine variety is not necessarily affine; even better: one can have a bundle with affine fibers over a projective variety such that the total space is affine: $GL_2(\mathbb C) \to \mathbb P^1(\mathbb C)$ - 3 This also reminds me to include the example of the map (x,y) -> (x,xy) with non-affine image. – Richard Borcherds Aug 2 2010 at 4:10 2 Btw, I have seen this mistake in public (assuming that the image of an affine variety is affine) by a prominent mathematician. It seems to be a statement that becomes less and less natural the further you advance in algebraic geometry (unless of course you make your living by thinking about $G/P$s). – ABayer Aug 2 2010 at 11:32 8 There are also counterexamples even for curves: the map taking x to (x^2: (x-1)^2) maps the affine line onto the projective line. – Richard Borcherds Aug 2 2010 at 17:05 As someone who (almost) makes living thinking about $G/P$s, let me comment that subgroups $H$ of a semisimple algebraic group such that $G/H$ is affine (or even quasi-affine) are very rare. – Victor Protsak Aug 2 2010 at 21:15 Just to expand a bit on Victor's theme: I think it's a theorem of Matsushima/Richardson/Haboush that if $G$ is reductive then $G/H$ is affine if and only if $H$ is reductive. So I imagine he's pointing out that reductive subgroups are (in some sense) "sparse"? – Thomas Nevins Aug 9 2010 at 20:36 One of my favourite sets of examples, stolen from Miles Reid, is the determination of rings $R=\oplus_n H^0(X, nD)$ for ample divisors $D$ on projective varieties $X$. A nice sequence, where a lot of the general features of the theory already show up, is to take $X=E$ an elliptic curve, and $D=nP$ for $P$ a point on $E$. $n=1$: generators in degrees $1,2,3$, with a relation in degree 6 (by Riemann-Roch), leading to $E\subset P^2[1,2,3]$ (weighted projective space) a sextic hypersurface given by Weierstrass equation $z^2=y^3 + ax^3 y + bx^6$. $n=2$: get $E\subset P^2[1,1,2]$, a double cover of $P^1$; $P\in E$ is one of the ramification points. $n=3$: get $E\subset P^2$, a general cubic; $P\in E$ is an inflection point on the image. $n=4$: get $E\subset P^3$, a general complete intersection of bidegree $(2,2)$. $n=5$: get $E\subset P^4$, a non-complete intersection variety, equations are the $4\times 4$ Pfaffians of a general $5\times 5$ skew-symmetric matrix of linear forms, equivalently a linear section of $Gr(2,5)$ in its Plucker embedding. ... - I would love to have done examples like this when I learned algebraic geometry. – Ryan Reich Aug 2 2010 at 23:51 This answer and mine are in much the same spirit. See also Section 2 of my (old, but unfinished) manuscript: math.uga.edu/~pete/biconic.pdf. I heartily agree that these examples should be more prominent in introductory courses. – Pete L. 
Clark Aug 11 2010 at 23:53 Perhaps you mean the 4x4 Pfaffians for $n=5$?. – J.C. Ottem Sep 11 2010 at 19:17 Yes, indeed! Thanks. – Balazs Sep 17 2010 at 20:59 1 This kind of statement is folklore knowledge in some epsilon neighbourhood of Miles Reid; I don't know a reference for this particular statement. But in one direction, a linear section of $Gr(2,5)$ is clearly an elliptic curve of degree $5$. In the other direction, by Riemann-Roch you get an embedding of $E$ into $P^4$, which is projectively Gorenstein; by Eisenbud-Buchsbaum, its equations are the Pfaffians of a skew matrix. Googling, I also found Tom Fisher: Pfaffian representations of elliptic normal curves, in Trans AMS 263 (2010), which does a lot more of this sort of thing. – Balazs Sep 20 2010 at 14:28 show 1 more comment A great difference in the transition from varieties to schemes is the presence of non-reducedness. Sometimes on a given scheme there are different natural scheme structures and with respect to one, the scheme is non-reduced, while with respect to the other one it is reduced. An example is the scheme of nilpotent $n \times n$ matrices for $n \geq 2$. One way to give a scheme structure to the set of nilpotent matrices is to observe that an $n \times n$ matrix $X$ is nilpotent if and only if $X^n$ is the zero matrix. Thus all the entries of $X^n$ are polynomials (homogeneous of degree $n$) in the entries of $X$, whose vanishing set is the set of nilpotent matrices. As has been stated in the question and noted in a comment by Victor, the trace of a nilpotent matrix is a function that is identically zero on the set of nilpotent matrices, but it is not contained in the ideal just described, since the ideal is generated by homogeneous polynomials of degree $n$. There is though a different way of making the set of nilpotent matrices into a scheme, namely, by realizing that a matrix $X$ is nilpotent if and only if all the eigenvalues of $X$ vanish, and hence if and only if the characteristic polynomial of $X$ is the polynomial $\lambda ^n$. The coefficients of the characteristic polynomial of the matrix $X$ (except the leading coefficient) are therefore polynomials in the entries of the matrix $X$ whose vanishing describes again the locus of nilpotent matrices. This time, though, the polynomials generating the ideal are homogeneous of degrees $1,2,\ldots,n$. It is not difficult to argue that in this case the ideal generated by these polynomials is indeed radical, so that the scheme thus defined is reduced. - 2 This is a good example, and is one of those I listed in the question. It has another property which surprised me: by the nullstellensatz, some power of the trace is in the ideal of the nilpotent matrices. I first guessed that the smallest such power is the n'th power, but this is wrong: in general one needs a higher power. I don't know what the smallest such power is for n>2. – Richard Borcherds Aug 2 2010 at 16:24 While I am not sure of this, I suspect that the regularity of the ideal generated by the entries of X^n might be 2^n-1, so that the (2^n-1)-st power of the trace should be in the ideal. As I said, I am not sure of this and could only verify the assertion for n<4. – damiano Aug 2 2010 at 20:40 Here is smth to keep in mind: finding a candidate finite system of defining equations for the closures of conjugacy classes of matrices is rather easy, but proving the reducedness is surprisingly difficult. 
This was accomplished only recently, by Jerzy Weyman in 1989 (char=0: essential; matrices assumed nilpotent: can be removed), and the full answer for other classical groups may still be missing. Part of the difficulty is that unlike the nullcone (Kostant, 1961) these varieties are not complete intersections in general, and nontrivial homological algebra and rep theory is used in the proof. – Victor Protsak Aug 2 2010 at 21:33 Using a bit of representation theory, in char=0 Richard's question about the smallest power reduces to the following: find the smallest power $m$ of $p_1$ that belongs to the ideal generated by $p_{n+i}, 0\leq i\leq n-1$ in the ring of symmetric functions in $n$ variables, where $p_k$ is the $k$th power sum. It's clear that $m\geq 2n-1$ and it seems plausible that the answer can be extracted from Newton's formulas. I'd be surprised if this weren't already addressed in books on computational commutative algebra. – Victor Protsak Aug 3 2010 at 6:14 The gluing along closed subscheme examples are a nice exercise for playing with Spec. That is, computing the spec of a pushout of affine schemes like $$(X \leftarrow Z \rightarrow Y)$$ where one of the arrows is a closed immersion (or both are, for simplicity). It's a decent exercise to show that the Spec of such a pushout is what you expect pointwise. Some examples that are worth trying as exercises are: [1.] take a copy of $\mathbb{A}^1$ for $X$, a pair of points for $Z$ and a single point for $Y$. You get a nodal singularity. Alternately, glue two copies of $\mathbb{A}^1$ together at the origin. [2.] $X = Spec k[x]$ as above, $Z = Spec k[x]/x^2$, $Y = k$, producing a cusp. [3.] If $X = \mathbb{A}^2$, $Z$ is an axis and the map $Z \to Y$ is the usual 2-to-1 cover, then you get a pinch point. [4.] You can make the map $Z \to Y$ non-finite and produce non-noetherian affine schemes too. For example, $X = \mathbb{A}^2$, $Z$ one of the axes (or any curve), and $Y$ a point. You can later point out why you can't always do this for general (non-affine) schemes. That is, $X = \mathbb{P}^2$, $Z$ an elliptic curve, and look at doing that sort of pinch point construction for various maps $Z \to \mathbb{P}^1$. This can lead to things like algebraic spaces if you are so inclined. Another direction you can go with this sort of stuff is normality (ie, what is the geometric meaning of normality, all the examples you just computed with gluing are non-normal). - I'm used to gluing open subschemes, but I hadnt come across this idea of gluing closed ones in such weird ways before. Example 4 is a way of looking at k[x,xy,xy^2,...], a non-noetherian subring of a Noetherian ring, that had not occurred to me. I guess this example explains why gluing closed subschemes is uncommon. – Richard Borcherds Aug 2 2010 at 22:47 2 @Karl Schwede: I like your article on these gluing constructions! (www-personal.umich.edu/~kschwede/…) – Martin Brandenburg Aug 2 2010 at 23:34 Let me just write three among my favourite examples (over $\mathbb{C}$). 1) The point of type $E_8$ given by $x^2+y^3+z^5=0$ is the only isolated, $2$-dimensional, $factorial$ singularity. 2) The Klein plane quartic $C: x^3y+y^3z+z^3x=0$ is the only genus $3$ curve having automorphism group of maximal order. In fact $\textrm{Aut}(C)=PSL_2(\mathbb{F}_7)$, whose order is $168$. 3) There exist exactly $one$ plane quartic curve with three ordinary cusps (up to projective transformations). 
It is obtained by taking the image of a conic $C$, which is inscribed in a triangle $T$, via the standard Cremona transformation centered at the vertices of $T$. Well, maybe they are standard after all, but I like them... - I am very fond of 2). By reading through Noam Elkies' beautiful article on the Klein quartic, one can acquire a large supply of very nice examples to present to a class, touching on many different topics (representation theory, invariant theory, modular and Shimura curves, differentials,...) – Pete L. Clark Aug 1 2010 at 15:36 You meant $PSL(2,7) = PSL_2(\mathbb F_7) (\cong PSL_3(\mathbb F_2))$... – fherzig Aug 1 2010 at 18:33 Oops...Fixed, thank you! – Francesco Polizzi Aug 1 2010 at 18:51 The E8 singularity appears in Hartshorne as exercise V.5.8 where he gives a proof that it is factorial. Exercise 20.17 of Eisenbud's book shows more generally that if R is a 3-dimensional regular local ring then R/f is a UFD if and only if f is not the determinant with coefs in the max ideal. Klein's quartic appears in Hartshorne as exercise IV.5.7 but I agree that it deserves more attention. There seems to be no easy proof of its aut group: all arguments I can think of either use heavy calculations or a fair amount of machinery from representation theory. – Richard Borcherds Aug 2 2010 at 16:51 I know the following example from Kontsevich. Not sure it is suitable to your course, but I like it. Consider a space of closed 6-edges polygonal lines in Euclidean ${\mathbb R}^3$ such that each edge has length $1$ and each angle between subsequent edges is the right angle. The problem is to count the virtual and real dimension of that space (modulo the action of Euclidean group). Surprisingly, the quotient space has isolated points and component of the dimension $1$. - 2 It's such a neat example I'll fit it in somehow. I couldn't figure out what was going on algebraically, and had to resort to making a K'nex model. If I've understood it correctly, the edges and angles dont have to be equal, provided all opposite edges and angles are equal (of fixed size). – Richard Borcherds Aug 2 2010 at 3:48 I first saw this with a physical model, and I'd be very interested to hear a mathematical explanation. – Peter Samuelson Sep 11 2010 at 16:39 One example which is (at least over an algebraically closed field) very classical, but contemporary geometers and number theorists do not seem to be as intimately familiar with is the geometry of curves of genus one embedded in $\mathbb{P}^3$ as the complete intersection of two quadrics: $C: Q_1(x,y,z,w) = Q_2(x,y,z,w) = 0$. Here are two nice facts (I work over a ground field of characteristic not $2$ or $3$ but otherwise arbitrary): 1) Such embeddings of $C_{/k}$ correspond to degree $4$ rational divisors on $C$. 2) $C$ admits a rational divisor class of degree $2$ iff it admits a $2:1$ map onto a conic curve $X$ (which need not be $\mathbb{P}^1$) iff the pencil determined by $Q_1$ and $Q_2$ admits a $k$-rational conical quadric, i.e., a quadric with a variable omitted. (The class of the conic $X$ in the Brauer group of $k$ is the obstruction to the rational divisor class containing a $k$-rational divisor.) If these conditions are satisfied, then the remaining $3$ conical quadrics in the pencil are isomorphic, as a set with Galois action, to the points of order $2$ on the Jacobian elliptic curve. And a problem: 3) One quadric hypersurface over $k$ can of course be diagonalized. 
Characterize the set of elliptic curves over $\overline{k}$ which may be given as the complete intersection of two diagonal quadrics in $\mathbb{P}^3$. This can be solved both by pure thought and by calculation. - @Pete I really like these examples. Can you give a reference? – JME Mar 27 2011 at 18:59 1 @JME: the document which I have most readily at hand is this one: math.uga.edu/~pete/biconic.pdf – Pete L. Clark Mar 28 2011 at 10:00 I'd prefer examples that are unusual or nonstandard [...] Ok then I finally have a chance to present some of my interests ;-). Many people claim that schemes are only hausdorff in trivial cases. This is wrong. Namely, $Spec(A)$ is hausdorff if and only if $A$ is $0$-dimensional. More generally, a scheme $X$ is hausdorff if and only if it is $0$-dimensional and separated (in the scheme sense), and here you can replace $0$-dimensional by $T_1$ and separated by quasi-separated, if you wish. This shows that here the analogies of these notions in scheme theory and general topology become actually equivalences. Every compact hausdorff scheme is already affine (nice exercise, using just general topology and $Spec(R \times S) = Spec(R) \coprod Spec(S)$). Every locally compact totally disconnected hausdorff space can be made into a scheme; take the constant sheaf $\underline{\mathbb{Z}/2}$. Hausdorff schemes or more general sheaves on Stone spaces also play a role in the classification of certain algebraic rings. For example, every countable ring in which the elements satisfy $a^4=a$ is isomorpic to $\{f \in C(X,\mathbb{F}_4) : f(E) \subseteq \mathbb{F}_2\}$ for some closed subset $E$ of a Stone space $X$. Now here is another example: In topology, the linearity of vector bundle maps is declared fiberwise (instead of working with vector space objects and declaring linearity in a functorial way). This is wrong in algebraic geometry: A $S$-endomorphism of the affine $n$-space $\mathbb{A}^n_S$, which is linear on every fiber $\mathbb{A}^n_{\kappa(s)}$, does not have to be linear. It surprised me when I heard that infinite product is not an exact functor on the category of sheaves: If $A_i \to B_i$ are surjective, then $\prod_i A_i \to \prod_i B_i$ does not have to be surjective. The reason should be that the canonical map $(\prod_i A_i)_x \to \prod_i ({A_i}_x)$ is not an isomorphism. But I don't know of any explicit example for the $A_i \to B_i$. Hints? - Thank you for this answer, it was very enlightening! – B. Bischof Aug 1 2010 at 16:26 1 I think an explicit example for infinite products of sheaves of sets not being exact can be given as follows. Represent sheaves by their etale spaces over (say) the unit interval I, and take B_i to be the etale space I over I. Take A_i to be the disjoint union of the sets of an open cover of I whose sets have diameter less than 1/i. Then each map from A_i to B_i is surjective, but the sheaf product of all the A_i is empty, as no nonempty etale space can map to all the A_i. Seems related to failure of axiom of choice in a topos. – Richard Borcherds Aug 2 2010 at 23:08 An example of a non-linear fiberwise-linear $S$-endomorphism of $\mathbb{A}^n_S$ ? – Qfwfq Mar 28 2011 at 19:03 @unknowngoogle: Think of nilpotent polynomials. – Martin Brandenburg Mar 28 2011 at 19:08 Ah yes, of course. For some reason I was looking in the direction "rings that do not contain a field" instead of assuming that an answer in the direction "nilpotents" was allowed. 
– Qfwfq Mar 29 2011 at 14:29 The nonprojective surface X which is flat over a curve C consisting of two projective lines attached at two nodes, where the fibers over the smooth points of C are chains of two lines, and the fibers over the nodes are chains of three lines. Considering the degrees of line bundles restricted to the components of the fibers shows that X cannot admit an ample line bundle. - 1 This gives a singular non-projective complete surface. If any wants a nonsingular example, Hartshorne has a couple of examples of nonsingular nonprojective complete 3-folds in appendix B. – Richard Borcherds Aug 2 2010 at 22:18 Ravi has a variant in his notes where the general fiber is one line and the fiber over the nodes is two lines. It's over the same base C but not flat: math.stanford.edu/~vakil/0506-216/… – Michael Thaddeus Aug 2 2010 at 23:01 I am not sure whether these examples can be described as nonstandard, but they are among my favorites. Ex 5 may be too advanced for your course. 1. Proof of Poncelet's porism using correspondences and the group law on an elliptic curve. 2. There are 24 inflexion lines and 28 bitangent lines to a smooth plane quartic (Clemens, Scrapbook of complex curve theory). 3. Configuration of lines on a smooth del Pezzo surface of degree $d$ obtained by blowing up $n=9-d$ points in $\mathbb{P}^2$ is described via the root system of type $E_n$ (Manin, Cubic forms). 4. Configuration of exceptional curves on the resolution of a Klein-du Val singularity is given by a Dynkin graph (Shafarevich, Basic algebraic geometry, vol 1). 5. How many conics in $\mathbb{P}^2$ touch 5 conics in general position? (Fulton, Intersection theory) - 1. If you discuss the Whitney umbrella, I guess it is to show that blowing-up the singular point does not resolve the singularity whereas blowing-up the double line does. Another interesting properties of blow-up can be discussed by considering the 3 different resolutions of the conifold $x_1 x_2- x_3 x_4 =0$ in $\mathbb{C}^4$, namely the two small resolutions related by a flop and the blow-up of the isolated singularity with exceptional locus $\mathbb{P}^1\times\mathbb{P}^1$. 2. A quadric surface $Q$ in $\mathbb{P}^3$ is isomorphic to $\mathbb{P}^1\times \mathbb{P}^1$ via the Segre embedding. However, the Zariski topology of $Q$ is not homeomorphic to the product topology of $\mathbb{P}^1\times \mathbb{P}^1$ when each $\mathbb{P}^1$ is considered with the Zariski topology. 3. The orbifolds $\mathbb{C}^2/ \Gamma$ where $\Gamma$ is a discrete group of $SU(2)$. These orbifolds can be expressed as the simple isolated singularities of a surface and their resolution gives all the ADE Dynkin diagrams. - For 1, isn't it blowing up the line that resolves the singularity? Blowing up the origin doesn't (or have I completely forgotten how this works). – Karl Schwede Aug 2 2010 at 15:34 Yes, it is the blow-up of the line that revolves it although the point is a worse singularity. Corrected, thanks! – JME Aug 2 2010 at 16:02 2 Thanks; these are all good examples. In fact, so good that they were already in my notes... – Richard Borcherds Aug 2 2010 at 22:12 I would suggest prove the Hamilton Cayley theorem in one sentence using algebraic geometry: The theorem is true for diagonalizable matrices, which forms a Zariski dense set. (although it is not open, but it contains the open subset of matrices with distinct eigenvalues. Also, the irreducibility of $\mathbb{A}^n$ is assumed.) 
This proof can be compared with the "physicists'" proof of this theorem (over $\mathbb{C}$): If $A$ is a square matrix, then it is diagonalizable after perturbation, i.e. $A+\epsilon H$ is diagonalizable. And let $\epsilon\rightarrow 0$. Finally, as an example of the somewhat mysterious reduction steps in EGA, we can also restate and reprove it for integral domains by base change to the algebraic closure of the function field. - The affine line $\mathbb{A}_{\Bbbk}^1$ is not "simply connected" (i.e. has nontrivial connected étale covers) if $\Bbbk$ has positive characteristic: take $x\mapsto x^p+x$. - As Hurewicz would suggest, its $H^1$ is also not zero; any $x^{p-1} f(x^p)\ dx$ is a closed 1-form, but only exact if $f=0$. – Allen Knutson Aug 2 2011 at 17:20 For the more arithmetically minded people, an illustration of the Weil conjectures in the simple cases of the projective space by direct checking and also proving the Hasse-Weil theorem for elliptic curves could be very instructive. Again, for the more arithmetically minded people who are also open to some speculation, one could use counting of the number of points on the projective space over $\mathbb F_q$ and use the observation of Tits to introduce the Field with one element. The Mordell-Weil theorem and Faltings theorem could be mentioned(without proof, of course) and compared to the case of genus 0, ie lines, to show that the geometry of a curve affects its arithmetical behavior significantly. - A small but illuminating exemple : isolated singularities consisting of affine $k$ lines meeting at the origin in $\mathbb{A}^n$. One can show easily that the analytic type of the singularity depends on whether lines are coplanar by computing Zariski tangent cone, that there is a 1-dimensional moduli of analytical types for 4 lines in the plane (cross-ratio) or that seminormalization of such a singularity always gives the maximally non-coplanar case. When you introduce flat families, you can also look at flat limits of families of lines in this way, and explain where the "missing" tangent vector goes. - It seems useful to give a counterexample to a commonly misunderstood version of upper semi continuity, namely the one stated erroneously in Shafarevich's BAG chapter I, section 6.3, corollary to theorem 7. the correct version assumes the map is proper or is stated locally on the source instead of the target, as in Mumford's red book. E.g. take source and target both = C^3, complex 3 space, and map (x,y,z)-->(x^2,y,z). The subset W = {(1,y,0)} of the target has reducible preimage, namely two isomorphic copies W1, W2 of W. Now blow up the source along one copy W1, and then remove the exceptional curve over one point of W1. Now the composition of the blow down and the original map, has one dimensional fibers over a non closed subset of the target. As a remark on the example of the blowup at one point of the second symmetric product of a genus 2 curve over the complexes, this example shows that the second symmetric product of an algebraic curve can have non algebraic deformations, since that is true for the 2 dimensional compact complex torus. This remark has occurred several times since the paper by Xavier Gomez Mont in about 1979. The lines on a cubic surface are classical, and the conic bundle structure on that surface any one of them gives rise to is also useful. The book by Semple and Roth is laden with classical examples. 
Perhaps the first occurrence of the use of 4 space to draw conclusions about varieties in 3 space is the paper by Corrado Segre where he studies the quartic surface with a double line in P^3 by projecting to it from an intersection of two quadrics in P^4. This is of course another one in the sequence of del Pezzo surfaces of which the cubic surface is merely the most famous. The example of Schubert's method of degeneration to count the number of lines in P^3 that meet all of 4 general lines is illustrative. One can also combine it with the Plucker embedding of the set of all lines in P^3, computing the hyperplane section of that embedding, and reduce to the fact that this variety is a quadric in P^5. Joe Harris' book is also loaded with nice examples, including "Fano" parameter spaces and their tangent spaces. - Nonreduced schemes are a good source of bafflement and wonder. For example: The fiber product of reduced schemes may be nonreduced. The kernel of the $p$'th power map on $\mathbb{G}_m$ in characteristic $p$ is a fast example. This leads to what would seem to me to be an even stranger fact: The quotient of a reduced scheme by the action of a nonreduced group scheme can be reduced. Also, the quotient of that same scheme by the action of the reduced subscheme of that same group can be reduced again. Surely they are not both reduced. - 3 Ryan, for your second example I think the real subtlety for a beginner is to get their head around a good definition of quotient which puts more emphasis on functorial aspects than on the underlying space. Anyway, a more surprising phenomenon related to your suggestions is that the underlying reduced scheme of a finite type group scheme over a field need not be a subgroup scheme. Also, the fact that a ring with a regular faithfully flat extension is necessarily regular seems like a miracle. Maybe you wanted to say "smooth" instead of "reduced" in your 2nd example? – BCnrd Aug 1 2010 at 15:16 1 Concerning your first point, you can (and should?) first explain it purely algebraically: if $l/k$ is an inseparable finite degree field extension, then $l \otimes_k l$ is nonreduced. Speaking from my own experience, characteristic $p$ geometry seems much less pathological and suprising when I have some familiarity with characteristic $p$ algebra. Here you can bring back geometry by pointing to the nonreducedness of the fiber product as a sort of "I told you so" justification of the non etaleness of Spec l over Spec k. – Pete L. Clark Aug 1 2010 at 15:45 19 "The fibre product of (blah) schemes may not be (blah)". This isn't at all surprising---it's properties of morphisms which one would expect to inherit, not properties of schemes. The map from Spec(C) to Spec(Q-bar) is awful, but both schemes are almost maximally non-pathological (in the sense that they'll have almost any reasonable property that schemes might have). Try forming the pullback of two maps Spec(C)->Spec(Q-bar) to get some horrible non-Noetherian mess: this is happening because the maps are bad, not because the schemes are bad. – Kevin Buzzard Aug 1 2010 at 18:18 All the more reason to give this and many other examples of fiber products producing pathological scheme-theoretical properties! The point you make is one of the most important in algebraic geometry. – Ryan Reich Aug 1 2010 at 19:16 While is may be obvious to experts that fiber products need not preserve (blah), I agree that it is worth pointing this out to beginners. 
I'm missing something in the second example ("The quotient of a reduced scheme by the action of a nonreduced group scheme can be reduced."); the quotient of a reduced point by anything seems to be just a reduced point, which does not seem all that interesting. – Richard Borcherds Aug 2 2010 at 16:31 show 1 more comment There is a nice geometric example by Zariski to show why going down theorem (http://en.wikipedia.org/wiki/Going_up_and_going_down) fails when the bottom ring is not normal. Take a cylinder over a node and call its coordinate ring $A$. Let $B$ be the normalization of $A$. Let $P$ be a singular point of Spec$(A)$ and $Q_1,Q_2$ be points of Spec$(B)$ lying above $P$. Take an irreducible curve $C$ in Spec$(B)$ passing through $Q_1$ but avoiding $Q_2$. Then there no sequence in $A$ corresponding to the inclusion of prime ideals $I(C) \subset I(Q_1)$. - Tsen's theorem: the function field $K$ of a curve $C$ over an algebraically closed field $k$ is $C_1$ (hypersurfaces of degree $d$ in $\mathbb P^n$ have $K$-points when $d\le n$). Hartshorne uses this to show that ruled surfaces have sections, but his reference is to Tsen's paper of $1936$(?) which is written in terms of Galois cohomology and conceals the geometric significance (from me, anyway). It's proved, very concretely, in SGA $4\ 1/2$ (Deligne, Arcata), so not only would the class get a theorem, it would get contact with other bits of algebraic geometry. -
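As a computational footnote to the nilpotent-matrix discussion earlier on this page (which power of the trace lies in the ideal generated by the entries of $X^n$): for $n=2$ the answer can be checked directly with a Gröbner basis. The sketch below assumes SymPy is available; it only treats this smallest case and is an illustration, not part of any answer above.

```python
# For n = 2, check which power of the trace lies in the ideal generated by the
# entries of X^2, where X = [[a, b], [c, d]].  (Assumes SymPy is installed.)
from sympy import symbols, groebner

a, b, c, d = symbols("a b c d")
gens = [a**2 + b*c,      # (1,1) entry of X^2
        a*b + b*d,       # (1,2) entry
        a*c + c*d,       # (2,1) entry
        b*c + d**2]      # (2,2) entry
G = groebner(gens, a, b, c, d, order="grevlex")
trace = a + d
for k in (1, 2, 3):
    print(k, G.contains(trace**k))
# Expected: 1 False, 2 False, 3 True -- so for n = 2 the smallest power is
# 3 = 2^2 - 1, consistent with damiano's guess of 2^n - 1 in the comments.
```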
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 221, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9328856468200684, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Blue_Wash&redirect=no
# Blue Wash

From Math Images. Field: Fractals. Image created by: Paul Cockshott (website: Fractal Art).

This image is a random fractal that is created by continually dividing a rectangle into two parts and adjusting the brightness of each resulting part.

# Basic Description

This image is considered a random fractal, because the behavior and coloring of the image are determined randomly. The basic rule governing a fractal of this kind:

1. Take a rectangle.
2. Divide the rectangle into two sub-rectangles at a random location.
3. Adjust the brightness of each sub-rectangle by a random factor.
4. Repeat for all current sub-rectangles (now considered "rectangles") in the image.

### Two Methods

The artist actually experimented with at least two methods to create images similar to the random fractal above, which differ in how the brightness is adjusted in each split rectangle.

• The first (Basic) method involves simply increasing the brightness of one sub-rectangle and decreasing the brightness of the other sub-rectangle of a split rectangle by a random variable. This method was used to create the fractal featured at the top of this page.

• The second (Inclined) method involves adjusting the brightness of the two sub-rectangles gradually to make an inclined plane in brightness within each sub-rectangle. The brightness of the line where the rectangle is split is determined randomly, and the two resulting sub-rectangles then each form an increasing or decreasing plane in brightness to meet the brightness of the dividing line. If you compare the featured image at the top of the page (created by the first method) to the image captioned "Inclined Brightness Method" (created by this second method), you can see that this method results in a more "painterly" effect.

An interesting observation about the creation of these fractals is that with each iteration, the changes in brightness and in spatial structure (splitting the rectangles) become smaller and smaller, and their influence on the images gradually diminishes. Thus, after a number of iterations, the fractals barely change in appearance. (The original page included interactive Java applets illustrating this observation.)

### Origins of this Fractal

The artist who created this fractal, Paul Cockshott, is a computer scientist and a reader at the University of Glasgow. The various math images featured on this page emerged from his scientific research in stereo range finding, which is a method used to estimate distance and depth from two images (one from the right perspective and one from the left). Cockshott initially used Gaussian patterns for this project, but found that fractal patterns worked better...and were also more beautiful.

# A More Mathematical Explanation

Note: understanding of this explanation requires statistics and calculus.

## Basic Recursive Method

The featured image at the top of this page was created using the basic recursive method described in this section. The canvas begins with rectangles of various sizes and colors, each of which undergoes this basic recursive method.

• Each rectangle is split into sub-rectangles ($a$ and $b$) by a random horizontal or vertical line ($\overline{PQ}$).
• The random offset value used to decrease or increase the brightness level of each rectangle is determined randomly, but the mean of the probability density function of the random offset value is proportional to the square root of the area of each sub-rectangle. Algebraically: if a sub-rectangle has length $l$ and width $w$, and the mean of the probability density function $f(x)$ of the random offset value $X$ is
$$E(X) = \int_{-\infty}^{\infty} x f(x)\, dx,$$
then
$$\int_{-\infty}^{\infty} x f(x)\, dx \propto \sqrt{l \cdot w}.$$

• The random offset value is then added to the brightness level of sub-rectangle $a$ and subtracted from the brightness level of sub-rectangle $b$ to produce the next step of the recursion. The random value is chosen such that, on average, the smaller sub-rectangle will have a smaller brightness offset than the larger sub-rectangle.

This method is repeated continuously to create a random fractal.

## Inclined Recursive Method - a more painterly effect

(The original page shows three images here: a random fractal created using the basic method, one with a higher $k$, and one with a lower $k$.)

In this method, each rectangle is again split into two sub-rectangles by a random dividing line $\overline{PQ}$. However, instead of using a random offset value to directly increase or decrease the brightness of the two sub-rectangles, there is an inclined change in brightness. The random variable used in this procedure has a probability density function with:

• a mean of $E(x)= 0$
• a standard deviation of $\sigma\approx k|\overline{PQ}|$, where $k$ is a constant.

The brightness of the dividing line itself is increased or decreased by the random variable, and the sub-rectangles $a$ and $b$ then form inclined planes in brightness that gradually increase or decrease to match the brightness of the dividing line. Similar to the basic method, the random variable is picked such that, on average, smaller rectangles will have a smaller brightness offset than larger rectangles.

Since $k$ reflects the degree of randomness of the brightness of the color in each sub-rectangle, a higher $k$ produces a more random fractal: of the two comparison images, the one with the higher $k$ constant looks noticeably noisier than the one with the lower $k$.

# References

Paul Cockshott, Fractal Art.

# Future Directions for this Page

• An applet letting users pick a few starting colors to start the random fractal and watch it produce an image, or even let them change $k$.
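For readers who want to experiment, here is a minimal sketch of the basic recursive method in Python with NumPy. The page does not specify the exact probability distributions, colour handling, or stopping rule that Cockshott used, so the Gaussian offsets, greyscale canvas, and constants below are illustrative assumptions rather than a reconstruction of the original program.

```python
# Minimal greyscale sketch of the "basic" Blue Wash recursion: repeatedly pick a
# rectangle, split it at a random position, and offset the brightness of the two
# halves by a random amount whose typical size scales with sqrt(area).
import numpy as np

def blue_wash(size=512, splits=2000, seed=0):
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.5)      # uniform mid-grey starting canvas
    rects = [(0, 0, size, size)]          # (x, y, width, height)
    for _ in range(splits):
        x, y, w, h = rects.pop(int(rng.integers(len(rects))))
        if max(w, h) < 2:                 # too small to split further
            continue
        if w >= h:                        # split the longer side at a random place
            cut = int(rng.integers(1, w))
            a, b = (x, y, cut, h), (x + cut, y, w - cut, h)
        else:
            cut = int(rng.integers(1, h))
            a, b = (x, y, w, cut), (x, y + cut, w, h - cut)
        # One half gets brighter, the other darker; the typical offset size is
        # proportional to sqrt(area) of the sub-rectangle, as described above.
        for (rx, ry, rw, rh), sign in ((a, +1), (b, -1)):
            offset = sign * abs(rng.normal(0.0, 0.5)) * np.sqrt(rw * rh) / size
            img[ry:ry + rh, rx:rx + rw] += offset
        rects += [a, b]
    return np.clip(img, 0.0, 1.0)

canvas = blue_wash()
# e.g. save it with matplotlib:  plt.imsave("blue_wash.png", canvas, cmap="Blues")
```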
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8996798992156982, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/186265/if-kappa-is-a-cardinal-aleph-is-any-aleph-number-and-if-kappa-leq-a
# If $\kappa$ is a cardinal, $\aleph$ is any $\aleph$-number, and if $\kappa\leq\aleph$ then $\kappa$ can be well ordered as well.

I'm having trouble understanding the statement:

If $\kappa$ is a cardinal, $\aleph$ is any $\aleph$-number, and if $\kappa\leq\aleph$ then $\kappa$ can be well ordered as well.

I understand the concept of well-ordering a set of cardinal numbers (any set of cardinal numbers is in fact well-ordered by the relation $\le$), but I'm not sure I have a proper understanding of what exactly it means to well-order an individual cardinal number. I know that to each cardinal number there is associated a unique ordinal number, and an ordinal number is a set consisting of the ordinal numbers strictly less than it, so it is well-ordered. So when we say that $\kappa$ is well-ordered, do we just mean that the ordinal number this cardinal number is associated to is well-ordered? If what I'm saying is correct, aren't all cardinals well-ordered? What do we need this condition for then: $\kappa \le \aleph$ (where $\aleph$ is a well-orderable cardinal)?

Sorry if I'm confusing you too.

- 1 Are you possibly working without the assumption of the Axiom of Choice? – Arthur Fischer Aug 24 '12 at 11:46
- 2 If you're asking about a detail in some previous question, you should say so in your question. – Chris Eagle Aug 24 '12 at 12:00

## 1 Answer

Assuming the axiom of choice, cardinals are exactly initial ordinals, that is, ordinals which are not bijectible with any of their members. However, without the axiom of choice there might be non-well-orderable sets. For non-well-orderable sets we define cardinality in a slightly different way.

As it turns out, cardinals are partially ordered in ZF, and the fact that they are totally ordered is equivalent to AC. The partial order is defined by injections between sets, and it is not hard to see that if cardinality is invariant under bijections then the partial order over the sets translates well to a partial ordering of the cardinals.

When we say that $\kappa$ is a well-ordered cardinal we mean that $\kappa$ is an ordinal. When we say that $\kappa$ is a general cardinal we mean that it represents the cardinality of an arbitrary set, which may not be well-orderable.

To say that $\kappa\leq\aleph_\alpha$ for some $\alpha$ is to say that every set $K$ such that $|K|=\kappa$ has an injection into the $\alpha$-th initial ordinal. If a set $K$ has an injection into an ordinal it inherits the well-ordering of the ordinal, namely: if $f\colon K\to\alpha$ is an injective function into an ordinal then $x\prec y\iff f(x)\in f(y)$ is a well-ordering of $K$.

To summarize, to say that $\kappa$ is an arbitrary cardinal and $\kappa\leq\aleph$ for some $\aleph$ number is to say that any set of cardinality $\kappa$ can in fact be well-ordered, and therefore, by the definition of cardinals, $\kappa$ itself is an ordinal.

- Thank you very much for your help. – Mark Aug 24 '12 at 13:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9491482377052307, "perplexity_flag": "head"}
http://mathoverflow.net/questions/120694/how-are-modal-logic-and-graph-theory-related/120710
## How are Modal Logic and Graph Theory related?

I am currently taking a graduate logic course on Modal Logic and I can't help noticing that certain classes of frames are characterized by modal axioms such as (4) $\Box p \rightarrow \Box \Box p$, (5) $\Diamond p \rightarrow \Box \Diamond p$, or (B) $p \rightarrow \Box \Diamond p$, which characterize frames as being transitive, Euclidean, and symmetric, respectively. In general, I notice many similarities between the models used in Modal Logic and the graphs in Graph Theory, and I'm wondering if anyone knows whether there are applications of Modal Logic to Graph Theory, or if one subject might be a special case of the other.

In any case, if anyone has studied this before or knows of any references on the interplay between Modal Logic and Graph Theory I would be very interested to read about it, and if it has not been studied before then I would be interested in any ideas regarding what open research problems could be stated to tackle the correspondence between these two topics. (A category theory perspective on this interplay would also be very interesting.)

- 2 One category-theoretic approach to this is that Kripke models of modal logic can be seen as sheaf/presheaf models, with the modalities often coming from sheafification for some Grothendieck coverage (equivalently, some Lawvere-Tierney closure operator). I can’t off the top of my head remember where is a good source for reading up on this, but hopefully those keywords should at least be useful for searching with. – Peter LeFanu Lumsdaine Feb 3 at 19:28
- 1 The answers below have mentioned that not every first order definable class of graphs is definable by a formula of propositional modal logic. It might be worth mentioning that first order logic is the smallest extension of modal logic endowed with nominals (variables whose semantics is a singleton) and the universal modality (allowing one to assert truth at every state) that has interpolation. Also, modal logic with nominals and the universal modality is decidable. See Balder Ten Cate's thesis hylo.loria.fr/content/papers/files/phd-thesis.pdf. – Rob Myers Feb 4 at 18:59
- @Peter LeFanu Lumsdaine: Your comment is very interesting to me and I can't find many references on the topic you alluded to. Have you thought of anything I could look up regarding this? Possibly a survey article, or at least a journal article which defines the words you are using in the context of modal logic? Thanks – Samuel Reid Feb 6 at 17:16
- I believe the book "Toposes and local set theories" treats this to some extent, in chapter 6; I think Maclane/Moerdijk also goes into this, albeit at a much higher level of abstraction. – Noah S Feb 22 at 2:35
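The frame correspondences quoted in the question are very concrete and can be checked by brute force on small finite frames. The following sketch (plain Python, added here purely as an illustration and not part of any answer below) verifies that $\Box p \rightarrow \Box\Box p$ is valid on a frame exactly when its accessibility relation is transitive, for all frames on three worlds.

```python
# Brute-force check, on all 3-world frames, that box p -> box box p is valid on a
# frame (W, R) exactly when R is transitive.  A finite sanity check only, not a proof.
from itertools import chain, combinations, product

def box(W, R, S):
    """Truth set of 'box phi', given the truth set S of phi on the frame (W, R)."""
    return {w for w in W if all(v in S for (u, v) in R if u == w)}

def validates_axiom_4(W, R):
    """Is box p -> box box p true at every world under every valuation of p?"""
    for bits in product([False, True], repeat=len(W)):
        P = {w for w, keep in zip(sorted(W), bits) if keep}   # a valuation for p
        bp = box(W, R, P)
        if not bp <= box(W, R, bp):
            return False
    return True

def is_transitive(R):
    return all((u, w) in R for (u, v) in R for (vv, w) in R if vv == v)

W = {0, 1, 2}
edges = [(u, v) for u in W for v in W]
all_relations = chain.from_iterable(combinations(edges, k) for k in range(len(edges) + 1))
print(all(validates_axiom_4(W, set(R)) == is_transitive(set(R)) for R in all_relations))
# Expected output: True
```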
For example, as you mentioned in the question, we can characterize transitivity by a single modal formula. It turns out that the class of properties of frames which can be captured by modal formulas is substantially larger than the class of first-order-definable properties. Blackburn, de Rijke, and Venema's book ("Modal Logic") gives the example of the Löb formula: $$\square (\square p\implies p)\implies \square p$$ They show that this formula is a validity in precisely those frames in which the relation $R$ is transitive and well-founded (although they use the term "converse well-founded"). By a compactness argument, this class of frames is not first-order axiomatizable.

A result of Goldblatt and Thomason in 1974 showed that "a first-order frame property is modally definable iff it is preserved under taking generated subframes, p-morphic frame images, disjoint unions, and inverse ultrafilter extensions." I don't really understand what all that means, but at the very least we can take away that not every first-order property of frames is characterized by a (propositional) modal formula.

In the other direction, since the definition of "valid modal formula" is second-order, it's clear that any class of frames which can be captured by a modal formula is definable in second-order logic. I recall a paper on the strength of modal logic that showed that a sort of converse to this held, despite the converse itself failing (since, as mentioned above, not even every first-order property of frames is modally definable), but I can't track it down at the moment. EDIT: Emil found it - it's "Reduction of second-order logic to modal logic" by S. K. Thomason, and a (very poor) copy can be found here: http://onlinelibrary.wiley.com/doi/10.1002/malq.19750210114/abstract.

In general, the book "Modal Logics" by Chagrov and Zakharyaschev is probably the book to look at. I don't know how up-to-date it is anymore, but it seems to include every perspective on modal logics that I've heard of, short of the category-theoretic aspects.

EDIT: Another aspect, which I didn't think of at first, is given by alternate interpretations of modalities. For example, suppose we interpret $\square p$ as meaning "more than half of visible nodes satisfy $p$," instead of "all visible nodes satisfy $p$," or some other interpretation. Under each interpretation, the class of graphs we can characterize by modal formulas changes; and while any specific alternate interpretation is probably not too interesting, general tools for studying that would probably be very deep and valuable. I don't know of any work on this, but I suspect it's been done before; the closest related source I can find at present is the extended abstract of "The Modal Logic of Probability" (Heifetz, Mongin), which seems vaguely along these lines.

- Well, $R\subseteq X^2$ is well-founded if every nonempty subset of $X$ has an $R$-minimal element, and converse well-founded if every such set has an $R$-maximal element. These are distinct properties, and GL corresponds to transitive frames with the latter property. As for the connection to second-order logic: modally definable second-order properties are indeed rather special (e.g., they are monadic $\Pi^1_1$, and they are preserved by the four operations listed), but the "sort of converse" you mention might be the result (due to Thomason, IIRC) that validity in full second-order logic ...
is recursively reducible to the relation "$\psi$ is valid in every Kripke frame in which $\phi$ is valid". – Emil Jeřábek Feb 4 at 13:02
- It's S. K. Thomason, Reduction of second-order logic to modal logic, Zeitschrift für mathematische Logik und Grundlagen der Mathematik 21 (1975), no. 1, 107–114. One can in fact fix $\phi$, so the reduction is to Kripke validity in a particular finitely axiomatized modal logic. (Thomason only states it for monadic second-order logic, but as far as I can see, this makes no difference, as one can encode a pairing function in the binary relation.) – Emil Jeřábek Feb 4 at 13:56
- @Emil: thanks for the citation! I spent a while looking for that paper with no luck. Re: your first point on terminology, I recall learning the opposite convention, that $R$ being well-founded meant that every $R$-increasing sequence terminates, but looking through my notes/books I can't see where I got that convention; so maybe I'm just conflating the picture with that of descriptive set theory (where, if your trees grow upwards, a tree is well-founded if every increasing chain terminates). – Noah S Feb 4 at 20:00

The ability of modal assertions to define natural and interesting classes of frames (or digraphs) is indeed intensely studied and constitutes one of the principal perspectives of the subject, pervasive in all the literature and textbooks. Indeed, I heard Blackburn assert at a conference talk last fall that one should think about modal assertions mainly as a way of describing certain classes of graphs. Any of the standard reference texts on modal logic will tell you that:

• the modal theory S5 characterizes the equivalence relations;
• the modal theory S4.3 characterizes the linear pre-orders;
• the modal theory S4.2 characterizes the directed partial pre-orders;
• the modal theory S4 characterizes the partial pre-orders;
• and so on.

There are numerous instances of this phenomenon for various logics, and modal logicians are particularly interested in logics with the finite frame property, which are those definable as arising from a class of finite frames. In some of my recent work, Structural connections between a forcing class and its modal logic, for example, we have been looking at all those logics and also what we call S4.tBA, topless-Boolean-algebra logic, which is characterized as the assertions true in every finite topless pre-Boolean algebra (a finite pre-Boolean algebra whose maximal cluster has been removed). We keep being pushed toward the idea that this may be the modal logic of class forcing, and also of c.c.c. forcing. The connection between the modal assertions and the nature of the frames is exploited throughout the work.

- +1 for the unexpected appearance of Medvedev's logic. (A minor quibble: since ML is not known to [and seems not to] coincide with the logic of arbitrary topless Boolean algebras, this applies to S4.tBA as well. That is, you shouldn't drop "finite" from the definition.) – Emil Jeřábek Feb 4 at 15:52
- Thanks, Emil, I have inserted "finite", and this is indeed how we define it in the paper. This logic seems to be related to the modal logic of class forcing, and also curiously of c.c.c. forcing, but we can't yet settle either case exactly.
– Joel David Hamkins Feb 4 at 18:19

On page 724, the book "Handbook of Modal Logic" contains the phrase "modal logics are merely sublogics of appropriate monadic second-order logic", so you might be interested in the book "Graph Structure and Monadic Second-Order Logic" by Bruno Courcelle and Joost Engelfriet.

Maybe look at arrow logic?

- Welcome to MO, Richard! – Joel David Hamkins Mar 14 at 1:39
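None of the answers above include code, but the frame correspondence the question starts from is easy to sanity-check on finite frames. Here is a small, self-contained Haskell sketch (the names are mine, not from the thread): it brute-forces every valuation of a single letter p on a finite frame and tests whether □p → □□p holds at every world, which by the correspondence discussed above should happen exactly when the accessibility relation is transitive.

    import Data.List (subsequences)

    type World = Int
    type Frame = [(World, World)]   -- accessibility relation on a finite set of worlds

    worlds :: [World]
    worlds = [0, 1, 2]

    -- semantics of the box: p holds at every world accessible from w
    box :: Frame -> (World -> Bool) -> World -> Bool
    box r p w = and [ p v | (u, v) <- r, u == w ]

    -- validity of axiom (4), []p -> [][]p: check all valuations of p and all worlds
    valid4 :: Frame -> Bool
    valid4 r = and
      [ not (box r p w) || box r (box r p) w
      | ws <- subsequences worlds        -- each subset of worlds is a valuation for p
      , let p v = v `elem` ws
      , w <- worlds
      ]

    transitive :: Frame -> Bool
    transitive r = and [ (a, c) `elem` r | (a, b) <- r, (b', c) <- r, b == b' ]

For example, valid4 [(0,1),(1,2)] is False (the frame is not transitive; take p true only at world 1 and look at world 0), while valid4 [(0,1),(1,2),(0,2)] is True, agreeing with transitive on both frames.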
http://mathoverflow.net/revisions/65082/list
## Return to Answer

I don't know whether this is useful in practice, but the straightforward generalization of your 0-connectedness criterion to a 1-connectedness criterion goes as follows. Any 1-cycle of the form $[(a_0, a_1), (a_1, a_2), \ldots, (a_k, a_0)]$ can be transformed into the trivial (empty) 1-cycle by a sequence of the following moves:

• $[\ldots, (a, b), (b,c),\ldots] \to [\ldots, (a,c),\ldots]$ if $a$, $b$ and $c$ are mutually disjoint (i.e. they correspond to a 2-cell in the complex)
• $[\ldots, (a,c),\ldots] \to [\ldots, (a, b), (b,c),\ldots]$ if $a$, $b$ and $c$ are mutually disjoint
• $[\ldots, (a, b), (b,b'),\ldots] \to [\ldots, (a, b),\ldots]$ if $b$ and $b'$ are parallel (remove degenerate 1-cell)
• $[\ldots, (a, b), \ldots] \to [\ldots, (a, b),(b,b'),\ldots]$ if $b$ and $b'$ are parallel (insert degenerate 1-cell)
• $[(a, b), (b,c), (c, a)] \to [\;\;]$ (empty) if $a$, $b$ and $c$ are mutually disjoint

The basic idea is that a triangulated disk is "shellable" in some sense. There should be a similar criterion for general $k$-connectedness (in terms of $k$-dimensional Pachner moves on triangulations of the $k$-sphere), but I haven't thought carefully about the shellability issues in higher dimensions.

EDIT: I think that allowing degenerate simplices (i.e. parallel disjoint curves) means that one doesn't need to worry about shellability. In case it wasn't clear what I meant by Pachner moves, here they are for $k=2$. The context is a triangulation of the 2-sphere where the vertices are labeled by curves and the curves at the vertices of a $j$-simplex ($j=1,2$) are mutually disjoint.

• Two adjacent triangles $(a,b,c),(b,c,d)$ can be replaced by $(a,d,b), (a,d,c)$ if $a$ and $d$ are disjoint.
• A single triangle $(a,b,c)$ can be replaced by three adjacent triangles $(a,b,d),(a,c,d),(b,c,d)$ if $d$ is disjoint from $a$, $b$ and $c$ (and we can also do the reverse move, replacing three triangles by one).
http://math.stackexchange.com/questions/259805/how-does-this-textbook-go-from-this-step-to-the-next-im-very-confused
# How does this textbook go from this step to the next? I'm very confused. Here's a picture detailing the question: In the second half of the picture where it says $$1 = 4 - 1 \cdot (11 - 2 \cdot 4)\\ 1 = 3 \cdot 4 - 1 \cdot 11$$ How did they jump from that first equation to the next? Where'd the $3$ come from? - ## 1 Answer You have $$4-1\cdot(11-2\cdot 4)=4-1\cdot 11+2\cdot 4=3\cdot 4-1\cdot 11$$ by distributing the $-1$ in front of $(11-2\cdot 4)$ and collecting multiples of $4$. -
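The step the textbook is taking is the back-substitution pass of the extended Euclidean algorithm. This is not part of the original answer, but a small Haskell sketch of that algorithm (the names are my own) produces exactly the coefficients 3 and −1 for the pair 11 and 4:

    -- egcd a b returns (g, s, t) with s*a + t*b = g = gcd a b
    egcd :: Integer -> Integer -> (Integer, Integer, Integer)
    egcd a 0 = (a, 1, 0)
    egcd a b = (g, t, s - q * t)
      where
        (q, r)    = a `divMod` b
        (g, s, t) = egcd b r

    -- ghci> egcd 11 4
    -- (1,-1,3)     i.e. (-1)*11 + 3*4 = 1, the same identity as above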
http://physics.stackexchange.com/questions/47969/einsteins-equations-as-a-dirichlet-boundary-problem
# Einstein's equations as a Dirichlet boundary problem

Can Einstein's equations in vacuum $R_{ab} - \frac{1}{2}Rg_{ab} + \Lambda g_{ab}= 0$ be treated as a Dirichlet problem? I am thinking of something along these lines: Consider a compact manifold $M$ with boundary $\partial M$. For a given metric $g_{ab}$ on $\partial M$, does a unique (up to diffeomorphisms) solution to Einstein's equations on $M$ exist? Are there any constraints on the metric on the boundary?

Einstein's equation can be recovered by looking for critical points of the Einstein-Hilbert action, but I don't know enough about variational problems to deduce anything about existence or uniqueness. The fact that general relativity is generally covariant also seems to complicate things, since this leads to a huge coordinate freedom. Does this depend on the dimension of the manifold, or can a general statement be made? I am mostly interested in the three- and four-dimensional cases. Note that I am not interested in the initial value formulation usually considered.

- "Einstein's equation can be recovered by minimizing the Einstein-Hilbert action" false! We look for critical points, but they don't have to be (local) extrema. – Willie Wong Jan 9 at 17:14
- I corrected the statement. – Friedrich Jan 9 at 18:09

## 1 Answer

In the Lorentzian case: I am not aware of anyone studying it, and don't know explicit counterexamples off-hand. But I have doubts about the uniqueness. In the Lorentzian case, the natural candidate to draw comparisons with is the wave equation. And we see that on something as simple as the unit square $[0,1]\times [0,1]$, the wave equation with vanishing Dirichlet boundary condition admits solutions $$(\partial_t^2 - \partial_x^2)u_k(t,x) = 0 \qquad u_k(t,x) = \sin (2\pi k x)\sin(2\pi k t)$$ Similar constructions can be made in arbitrary dimensions for the wave equation. In general for hyperbolic PDEs, the Dirichlet boundary value problem tends not to be well-posed.

In the Riemannian case: there has been a lot of interest in the mathematical community about this. To the best of my knowledge the Dirichlet problem for Einstein manifolds is still poorly understood. A starting place to do a literature search would be Michael Anderson's paper on the subject.

- Thank you for the reference! The wave equation example is a very special case since this only works for rectangles with rational width-to-height ratio. I guess that these cases will also arise in general relativity, but maybe there is still a well-posed Dirichlet problem for most choices of boundaries. – Friedrich Jan 9 at 18:35
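This is not part of the thread, but the wave-equation counterexample is easy to check numerically. A small Haskell sketch (names and step size are my own choices) confirms with centred finite differences that u_k vanishes on the boundary of the unit square and satisfies the wave equation up to discretisation error:

    -- u(t,x) = sin(2 pi k x) * sin(2 pi k t) for a fixed integer k
    u :: Double -> Double -> Double
    u t x = sin (2*pi*k*x) * sin (2*pi*k*t)  where k = 2

    -- centred second differences approximate u_tt - u_xx; it should be ~0
    residual :: Double -> Double -> Double
    residual t x = utt - uxx
      where
        h   = 1e-4
        utt = (u (t+h) x - 2 * u t x + u (t-h) x) / h^2
        uxx = (u t (x+h) - 2 * u t x + u t (x-h)) / h^2

    -- boundary values u t 0, u t 1, u 0 x, u 1 x all vanish (up to floating point,
    -- they are sines of integer multiples of 2 pi), and e.g.
    --   maximum [abs (residual t x) | t <- [0.1,0.2..0.9], x <- [0.1,0.2..0.9]]
    -- is small, consistent with u solving the equation with zero Dirichlet data.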
http://lukepalmer.wordpress.com/
# Luke Palmer Functional programming and mathematical philosophy with musical interludes # Polyamory and Respect I have been in an open, polyamorous relationship with my partner Amanda for about a year and a half. The relationship began as open for somewhat coincidental reasons, but over its course, I have developed a respect for polyamory — an understanding of why it makes sense for me, and why, I suspect, I will want my future relationships to be open as well1. And it is not for the reasons that most people think. For the first time in the course of the relationship, I’m currently being intimate with someone else. However, I was supportive of polyamory before I had taken advantage of its freedoms, even though Amanda was seeing other people reasonably often. The question is: why? Why would I put myself in such a position? Why would I allow Amanda to sleep with other people while she is with me? The key lies in a word of that final question — “allow”. To me, a healthy relationship is founded on mutual respect. There are many relationships which are not, but I find the most fulfillment from a relationship which is a coming together of two whole people with respect for each other. Anything else, to me, is just a fling (maybe a long-term one). So, under the supposition that I respect my partner, what does it mean to “allow” something? More pointedly, what does it mean to “disallow” something? Both allowing and disallowing suppose that I have the power to make decisions for her. It supposes that I am informed enough, without even being present, to make the judgment call about whether her actions were right. In a traditional monogamous setting, I have a wholly un-nuanced view of the situation — if she has slept with someone else, she has made the wrong choice, and I, therefore, have been wronged, and I (with the assistance of social norms) am the one who has decided that. Let’s imagine a polyamorous situation to help get to the heart of this. Let’s say that she met a new partner, and asked me if it’s okay if she sleeps with them. I will not respond with yes or no. She has offered me the power (and responsibility) to decide the best course of action for her, and I feel it necessary not to accept it. In accordance with my values, I can’t accept that power for anyone but myself: it would be a disservice to us both. However, I don’t mean to say that there are never any emotions that come with it, or that if there are I have an obligation to bury them. Indeed, I often get jealous and feel hurt when she is with someone else. But as a partner, I want to understand. Why did what she did make sense to her? How did she perceive that would affect me? — knowing that I am considered in her decision-making process is important to me. I will communicate how it actually affected me. Perhaps I spent the night alone feeling shitty — it’s important for her to know that, to take that possibility into account next time she makes a decision, and it’s important for me to understand that I am still alive and that we still love each other. But the key is that, because of respect, I give her the benefit of the doubt that she made the best choice she had — I just want to understand her reasoning, and probably be reassured that she still cares — which she always has. There are certain “codes” that I see as being very powerful, as leading to a stronger and more aligned internal experience. One of these is honesty — I am committed to always being open & honest (in a more nuanced way than I have been in the past). 
This is not because honesty in itself is “right”, but because integrity (i.e. always doing what I feel is right) is a quality that is important to me, and I have found that honesty is a code that is easy to verify (i.e. it is easy for me to know if I am abiding by it), which leads to integrity. This is because if I do something which I feel is wrong, I learn that, because of my code of open honesty, I will need to tell someone that I felt what I did was wrong. And that pressure is huge — I can no longer keep it to myself, now I need to show others about my lack of integrity when it happens. This pressure very quickly causes me to start acting with integrity. In the same way, I see polyamory as a code which is easy to verify, which leads to respect as a consequence, and respect for my partner is something I value. Jealousy happens — when she talks to someone I can tell she thinks is attractive, when she stays out later than I expected her, when she tells me she has or had a crush on someone. But I know that we are in an open relationship — we have agreed that being attracted to others, even to the point of acting on it, is okay, and therefore my feeling of jealousy cannot be instantly transformed into a feeling of righteousness and being wronged. Hence, I have to consider the larger situation — I have to see where she is coming from, I have to understand her and her choices, I have to know her better. And in doing so I understand her values, her wishes, her way of being, her way of relating to others — and such a deep understanding leads me to respect her. I have not felt such a deep respect for anyone else I have ever been in a relationship with, and I think the openness of our relationship has been a major factor in that. Further, polyamory leads to more communication and strength in our relationship. Consider “cheating” in a monogamous relationship. Let’s say I am in a monogamous relationship with my partner and, in a flush of sexually-excited irrationality I slept with someone else. I still love my partner very much and want to be with her, and we have a good, mutually supportive relationship, but I just made a mistake. (The idea that I could sleep with someone else while still being in love with her may seem impossible to some; that idea is worth examining — consider these prompts: masturbation, past relationships, fantasizing.) The question is, do I share my mistake with her? If I do share, it’s very likely that the relationship will end by social contract — many consider cheating to be an unforgivable offense. I don’t want the relationship to end, because I still love her and want to be with her. If I don’t share, I turn one wrong into two, and eventually many — not only have I wronged her with my actions, I wrong her by lying once about it — and, as lies are, probably many more times to cover up the first one. So not sharing is incompatible with my respect for her, and sharing is incompatible with my love and desire to be with her. Would it not be easier for everyone if I felt free to share my mistake, if I were not in this terrible bind after making it? With the roles reversed, what would it say about how much I care if I were willing to put my partner in such a bind? Letting go of the moral attachment to fidelity allows this situation easily to be a conversation — she can tell me how it affected her, I can understand that and that may inform my desire not to be so reckless again. 
Perhaps the conversation will reveal something about our relationship dynamic that needs attention, or perhaps something that is secretly making us both unhappy (one of the possible causes of sleeping with someone else). In that sense we can make a plan to repair it, or possibly we will mutually agree it is in both of our best interests to end the relationship, allowing us to be friends afterward, feeling sadness for our loss but not hurt and anger, because we both know that it was the right decision. In the case that the relationship does not end, the conversation may have revealed a deep problem which we are now on the road to solving, strengthening the relationship and bringing us closer. And maybe it was no big deal, and we understand that as sexual beings sometimes we just need to feel attractive and get our rocks off, and the relationship has not been harmed. All of these are preferable to an abrupt end due to an objective wrong, in which one person feels deeply guilty and the other feels deeply wounded. There are things which I will only briefly mention: for example, it is freeing to know that a friendship/relationship with someone other than my partner can develop in whatever way seems natural, without worrying if every action has crossed the line. This freedom allows me to get closer to others in my life, even if their gender allows some sexual tension, which brings me more fulfillment and happiness. In my experience, even though I like this other woman a lot, it has not in the least diminished the love I feel for Amanda, and experiencing that helps me see that it is probably the same for her when she is with someone else. In fact, since she has asked me for more reassurance now, I am verbalizing why I love her more, thus reminding myself and strengthening my sense of love for her. Where does the idea that love is a finite resource come from? These are the reasons why polyamory makes sense to me as a way of conducting myself in relationship. It leads to more honest communication (and therefore more integrity), more mutual understanding and respect, and ultimately a stronger relationship. I see traditional monogamy as a way to defend yourself from scary thoughts of abandonment, but the cost is a dynamic in which it is possible to justify a sense of ownership over your partner, controlling them and taking away their free agency. Is that really worth it? 1One reason is that I get to have future relationships without first ending this wonderful one. ### Share this: Posted in: Uncategorized | Tagged: love, personal, polyamory # The Plan Last September, I decided that it was time to get a programming job again. After two months of trying to find paid work (of any kind, \$10 would have been great!) as a composer, I realized that it’s really hard. There are a lot of people willing to work for free, and without much of a scoring portfolio (as opposed to the “pure music” I do) I have no way to distinguish myself to the studios that have a budget. Also, a lot of games want orchestral scores, and I don’t have the hardware and software I need to make convincing-sounding synthetic orchestral scores. Also, I’m sure once I get the necessary hardware and software, I will need time to practice with it. In short, I needed money and time. I am extremely fortunate to have, in my free-flowing way, stumbled onto a skill that is valued by the economy, and so I decided it was once again time to utilize that skill to achieve my other goals. 
I planned to live reasonably cheaply, save up money so that I can buy equipment and support myself for enough time to build up a portfolio by doing free projects. Now I have been programming for Clozure for almost six months. As far as jobs go, it's great. I get to work in my favorite language, Haskell, and they give me enough freedom to experiment with designs and come up with solutions that not only work, but that I would even consider good. My fear of programming jobs was based on having jobs where I constantly had to compromise my values, either by working in crappy languages or on startup-style timelines where there is no time to lose. With this job, I feel reunited with my love of software, and my inspirations for developer support tools have been once again ignited.

And so I have amended the plan: after I have saved enough money to support myself for several years, I will not only attempt to bootstrap a career composing, but dedicate my current work week to making a reality the software ideas which have been floating around in my head for half a decade. This prospect really excites me — the reason I have not been able to make my ideas is mostly the time pressure: there was always something else I should be doing, and so I always felt guilty working on my pet projects. I wonder, what am I capable of if my pet projects are the main thing?

I want to revive CodeCatalog. Max and I lost steam on that project for a number of reasons.

1. Due to family pressure, I returned to school.
2. I fell in love with a girl and got my heart all broken. That can be kind of a downer.
3. The priorities of the project compromised my vision. We were attempting to use modern wisdom to make the project successful: first impressions and intuitive usability came first. Our focus was on making it pretty and satisfying to use (which took a long time since neither of us were experienced web front-end developers), and that required me to strip off the most interesting parts of the project because noobs wouldn't immediately understand it.

So I want to re-orient (3) to make it more satisfying for me. I want to allow myself to make the large strides that I envisage rather than baby-stepping toward success — to encourage myself to use my own talents in design and abstraction rather than trying to be a front-end person, to emphasize the exciting parts (what Audrey Tang calls -Ofun). By funding myself, I will not feel the guilt that comes with working on a project at the same time as (1). I can do no more than hope that something like (2) doesn't happen. (I have a wonderful, stable and supportive relationship right now, so if that continues, that'd cover it :-)

I have many ideas; the reason I want to return to CodeCatalog in particular is mainly because I have identified most of my ideas as aspects of this project. My specific fancies change frequently (usually to things I have thought about before but never implemented), and so by focusing on this project in a researchy rather than producty way, I can entertain them while still working toward a larger goal and eventually benefitting the community. Here is a summary of some ideas that fit in the CodeCatalog umbrella (just because I'm excited and want to remember):

• Inter-project version control — I have always been frustrated by the inability of git and hg to merge two projects while still allowing interoperation with where they came from. The "project" quantum seems arbitrary, and I want to globalize it.
• Package adapters — evolving the interface of a package without breaking users of the old interface by rewriting the old package in terms of the new one. There is a great deal that can be done automatically in this area with sufficient knowledge about the meaning of changes. I talked with Michael Sloan about this some, and some of the resulting ideas are contained in this writeup.
• Informal checked documentation — documenting the assumptions of code in a machine-readable semi-formal language, to get the computer to pair-program with you (e.g. you write a division x/y and you have no y /= 0 assumption in scope, you'd get a "documentation obligation" to explain in English why y can't be 0).
• Structural editing — coding by transforming valid syntax trees. Yes it'd be cool, but the main reason it's compelling to me is in its synergy with other features. Once you have the notion of focusing on expressions, holes with contextual information (a la Agda), semi-automatic creation of package and data-type adapters, and smarter version control (e.g. a change might rename all references to an identifier, even the ones that weren't there when the change was made) all come as natural extensions to the idea.

I think the challenge for me will be to focus on one of these for long enough to make it cool before getting distracted by another. My plan for that is to set short-term goals here on my blog and use it to keep myself in check. I am considering involving other people in my project as a way to keep myself focused (i.e. maybe I can make a little mini-kickstarter in which my devotees can pledge small amounts in exchange for me completing a specific goal on time).

This is all 12-18 months away, which feels like a long time, but in the grand scheme is not that long in exchange for what I see as the potential of this endeavor. I'm just excited and couldn't help but think about it and get pumped up. Thanks for reading!

Oh, despite the date, this is totally not an April Fools joke (as far as I know ;-).

Posted in: Uncategorized | Tagged: code, codecatalog, haskell, music, work

# Follow Your Nose Proofs

We just had the first Categories for the Boulderite meetup, in which a bunch of people who don't know category theory tried to teach it to each other. Some of the people there had not had very much experience with proofs, so getting "a proof" was hard even though the concepts weren't very deep. I got the impression that those who had trouble mainly did because they did not yet know the "follow your nose" proof tactic which I learned in my first upper division math class in college. That tactic is so often used that most proofs completely omit it (i.e. assume that the reader is doing it) and skip to when it gets interesting. Having it spelled out for me in that class was very helpful. So here I shall repeat it, mostly for my fellow Categories members.

Decide what to do based on a top-down analysis of the sentence you are trying to prove:

| Shape of Sentence | Shape of Proof |
| --- | --- |
| If P, then Q. (aka. P implies Q) | Suppose P. <proof of Q> |
| P if and only if Q | (→) <proof of P implies Q>. (←) <proof of Q implies P> |
| For all x such that C(x), Q | Given x. Suppose C(x). <proof of Q> |
| There exists x such that Q. | Let x = <something> (requires imagination). <proof of Q> |
| P or Q | Either <proof of P> or <proof of Q> (or sometimes something tricksier like assume not P, <proof of Q>) |
| P and Q | (1) <proof of P>. (2) <proof of Q>. |
| not P | Assume P. <find contradiction> (requires imagination) |
| X = Y | Reduce X and Y by known equalities one step at a time (whichever side is easier first). Or sometimes there are definitions / lemmas that reduce equality to something else. |
| Something really obvious (like X = X, or 0 ≤ n where n is a natural, etc.) | Say "obvious" or "trivial" and you're done. |
| Something else | Find definition or lemma, substitute it in, continue. |

Along the way, you will find that you need to use the things you have supposed. So there is another table for how you can use assumptions.

| Shape of assumption | Standard usage |
| --- | --- |
| If P, then Q (aka P implies Q) | Prove P. Then you get to use Q. |
| P if and only if Q | P and Q are equivalent. Prove one, you get the other. |
| For all x such that C(x), P(x) | Prove C(y) for some y that you have, then you get to use P(y). |
| There exists x such that C(x) | Use x and the fact that C(x) somehow (helpful, right? ;-). |
| P and Q | Therefore P / Therefore Q. |
| P or Q | If P then <goal>. If Q then <same goal>. (Or sometimes prove not P, then you know Q) |
| not P | Prove P. Then you're done! (You have inconsistent assumptions, from which anything follows) |
| X = Y | If you are stuck and have an X somewhere in your goal, try substituting Y. And vice versa. |
| Something obvious from your other assumptions. | Throw it away, it doesn't help you. |
| Something else | Find definition, substitute it in, continue. |

Let's try some examples. First some definitions/lemmas to work with:

Definition (extensionality): If X and Y are sets, then X = Y if and only if for all x, $x \in X$ if and only if $x \in Y$.

Definition: $X \subseteq Y$ if and only if for every a, $a \in X$ implies $a \in Y$.

Theorem: X = Y if and only if $X \subseteq Y$ and $Y \subseteq X$.

• (→) Show X = Y implies $X \subseteq Y$ and $Y \subseteq X$.
• Assume X = Y. Show $X \subseteq Y$ and $Y \subseteq X$.
• Substitute: Show $X \subseteq X$ and $X \subseteq X$.
• We're done.
• (←) Show $X \subseteq Y$ and $Y \subseteq X$ implies $X = Y$.
• Assume $X \subseteq Y$ and $Y \subseteq X$. Show $X = Y$.
• (expand definition of = by extensionality)
• Show forall x, $x \in X$ if and only if $x \in Y$.
• Given x.
• (→) Show $x \in X$ implies $x \in Y$.
• Follows from the definition of our assumption $X \subseteq Y$.
• (←) Show $x \in Y$ implies $x \in X$.
• Follows from the definition of our assumption $Y \subseteq X$.

See how we are mechanically disassembling the statement we have to prove? Most proofs like this don't take any deep insight, you just execute this algorithm. Such a process is assumed when reading and writing proofs, so in the real world you will see something more like the following proof:

Proof. (→) trivial. (←) By extensionality, $x \in X$ implies $x \in Y$ since $X \subseteq Y$, and $x \in Y$ implies $x \in X$ since $Y \subseteq X$.

We have left out saying that we are assuming things that you would naturally assume from the follow your nose proof. We have also left out the unfolding of definitions, except perhaps saying the name of the definition. But when just getting started proving things, it's good to write out these steps in detail, because then you can see what you have to work with and where you are going. Then begin leaving out obvious steps as you become comfortable. We have also just justified a typical way to show that two sets are equal: show that they are subsets of each other.

Let's see one more example:

Definition: Given sets A and B, a function f : A → B is a surjection if for every $y \in B$, there exists an $x \in A$ such that f(x) = y.
Definition: Two functions f,g : A → B are equal if and only if for all $x \in A$, f(x) = g(x). Definition: $(g \circ f)(x) = g(f(x))$. Definition: For any set $A$, the identity $\mathit{Id}_A$ is defined by $\mathit{Id}_A(x) = x$. Theorem: Given f : A → B. If there exists f-1 : B → A such that $f \circ f^{-1} = \mathit{Id}_B$, then f is a surjection. • Given f : A → B. • Suppose there exists f-1 : B → A and $f \circ f^{-1} = \mathit{Id}_B$. Show f is a surjection. • By definition, show that for all $y \in B$, there exists $x \in A$ such that $f(x) = y$. • Given $y \in B$. Show there exists $x \in A$ such that $f(x) = y$. • Now we have to find an x in A. Well, we have $y \in B$ and a function from B to A, let’s try that: • Let $x = f^{-1}(y)$. Show $f(x) = y$. • Substitute: Show $f(f^{-1}(y)) = y$. • We know $f \circ f^{-1} = \mathit{Id}_B$, so by the definition of two functions being equal, we know $f(f^{-1}(y)) = \mathit{Id}_B(y) = y$, and we’re done. Again, notice how we are breaking up the task based on the structure of what we are trying to prove. The only non-mechanical things we did were to find x and apply the assumption that $f \circ f^{-1} = \mathit{Id}_B$. In fact, usually the interesting parts of a proof are giving values to “there exists” statements and using assumptions (in particular, saying what values you use “for all” assumptions with). Since those are the interesting parts, those are the only parts that an idiomatic proof would say: Proof. Given $y \in B$. Let $x = f^{-1}(y)$. $f(x) = f(f^{-1}(y)) = y$ since $f \circ f^{-1} = \mathit{Id}_A$. Remember to take it step-by-step; at each step, write down what you learned and what you are trying to prove, and try to make a little progress. These proofs are easy if you follow your nose. ### Share this: Posted in: Uncategorized | Tagged: math # Constructions on Typeclasses, Part 1: F-Algebras This post is rendered from literate Haskell. I recommend doing the exercises inline, so use the source. > {-# LANGUAGE DeriveFunctor > , DeriveFoldable > , DeriveTraversable > , TypeOperators #-} > > import Control.Applicative > import Data.Foldable > import Data.Traversable Certain kinds of typeclasses have some very regular instances. For example, it is obvious how to implement (Num a, Num b) => Num (a,b) and (Monoid a, Monoid b) => Monoid (a,b), and similarly if F is some applicative functor, (Num a) => Num (F a) and (Monoid a) => (Monoid F a) are obvious. Furthermore, these instances (and many others) seem to be obvious in the same way. (+) a b = (+) <\$> a <*> b mappend a b = mappend <\$> a <*> b fromInteger n = pure (fromInteger n) mempty = pure mempty And take them on pairs: (x,x') + (y,y') = (x + y, x' + y') (x,x') `mappend` (y,y') = (x `mappend` y, x' `mappend` y') fromInteger n = (fromInteger n, fromInteger n) mempty = (mempty , mempty) It would be straightforward for these cases to derive the necessary implementations from the type signature. However, it would be nice if there were a more abstract perspective, such that we didn’t have to inspect the type signature to find the operations – that they could arise from some other standard construction. Further, it is not quite as obvious from the the type signature how to automatically instantiate methods such as mconcat :: (Monoid m) => [m] -> m without making a special case for [], whereas hopefully a more abstract perspective would inform us what kinds of type constructors would be supported. In this post, we will see such an abstract perspective. It comes from (surprise!) 
category theory. I disclaim that I’m still a novice with category theory (but in the past few weeks I have gained competence by studying). So we will not get very deep into the theory, just enough to steal the useful concept and leave the rest behind. I welcome relevant insights from the more categorically educated in the comments. ## F-Algebras The unifying concept we will steal is the F-algebra. An F-algebra is a Functor f and a type a together with a function f a -> a. We can make this precise in Haskell: > type Algebra f a = f a -> a I claim that Num and Monoid instances are F-algebras over suitable functors. Look at the methods of Monoid: mempty :: m mappend :: m -> m -> m We need to find a functor f such that we can recover these two methods from a function of type f m -> m. With some squinting, we arrive at: > data MonoidF m > = MEmpty > | MAppend m m > > memptyF :: Algebra MonoidF m -> m > memptyF alg = alg MEmpty > > mappendF :: Algebra MonoidF m -> (m -> m -> m) > mappendF alg x y = alg (MAppend x y) Exercise 1: work out the functor NumF over which Num instances are F-algebras, and write the methods of Num in terms of it. Exercise 2: for each of the standard classes Eq, Read, Show, Bounded, and Integral, work out whether they are expressible as F-algebras. If so, give the functor; if not, explain or prove why not. Exercise 3: write a function toMonoidAlg which finds the MonoidF-algebra for a given instance m of the Monoid class. ## Combining Instances Motivated by the examples in the introduction, we can find the “instance” for pairs given instances for each of the components. > pairAlg :: (Functor t) => Algebra t a -> Algebra t b -> Algebra t (a,b) > pairAlg alga algb tab = (alga (fmap fst tab), algb (fmap snd tab)) Also, we hope we can find the instance for an applicative functor given an instance for its argument applicativeAlg :: (Functor t, Applicative f) => Algebra t a -> Algebra t (f a) but there turns out to be trouble: applicativeAlg alg tfa = ... We need to get our hands on an t a somehow, and all we have is a t (f a). This hints at something from the standard library: sequenceA :: (Traversible t, Applicative f) => t (f a) -> f (t a) which indicates that our functor needs more structure to implement applicativeAlg. > applicativeAlg :: (Traversable t, Applicative f) > => Algebra t a -> Algebra t (f a) > applicativeAlg alg tfa = fmap alg (sequenceA tfa) Now we should be able to answer the query from the beginning: Exercise 4: For what kinds of type constructors c is it possible to automatically derive instances for (a) pairs and (b) Applicatives for a typeclass with a method of type c a -> a. (e.g. mconcat :: [a] -> a). Demonstrate this with an implementation. ## Combining Classes Intuitively, joining the methods of two classes which are both expressible as F-algebras should give us another class expressible as an F-algebra. This is demonstrated by the following construction: > data (f :+: g) a = InL (f a) | InR (g a) > deriving (Functor, Foldable, Traversable) > > coproductAlg :: (Functor f, Functor g) > => Algebra f a -> Algebra g a -> Algebra (f :+: g) a > coproductAlg falg _ (InL fa) = falg fa > coproductAlg _ galg (InR ga) = galg ga So now we can model a subclass of both Num and Monoid by type NumMonoidF = NumF :+: MonoidF. Exercise 5: We hope to be able to recover Algebra NumF a from Algebra NumMonoidF a, demonstrating that the latter is in fact a subclass. Implement the necessary function(s). 
Exercise 6: Given the functor product definition > data (f :*: g) a = Pair (f a) (g a) > deriving (Functor, Foldable, Traversable) find a suitable combinator for forming algebras over a product functor. It may not have the same form as coproduct’s combinator! What would a typeclass formed by a product of two typeclasses interpreted as F-algebras look like? ## Free Constructions One of the neat things we can do with typeclasses expressed as F-algebras is form free monads over them – i.e. form the data type of a “syntax tree” over the methods of a class (with a given set of free variables). Begin with the free monad over a functor: > data Free f a > = Pure a > | Effect (f (Free f a)) > deriving (Functor, Foldable, Traversable) > > instance (Functor f) => Monad (Free f) where > return = Pure > Pure a >>= t = t a > Effect f >>= t = Effect (fmap (>>= t) f) (Church-encoding this gives better performance, but I’m using this version for expository purposes) Free f a can be interpreted as a syntax tree over the typeclass formed by f with free variables in a. This is also called an “initial algebra”, a term you may have heard thrown around in the Haskell community from time to time. We demonstrate that a free construction over a functor is a valid F-algebra for that functor: > initialAlgebra :: (Functor f) => Algebra f (Free f a) > initialAlgebra = Effect And that it is possible to “interpret” an initial algebra using any other F-algebra over that functor. > initiality :: (Functor f) => Algebra f a -> Free f a -> a > initiality alg (Pure a) = a > initiality alg (Effect f) = alg (fmap (initiality alg) f) Exercise 7: Give a monoid isomorphism (a bijection that preserves the monoid operations) between Free MonoidF and lists [], ignoring that Haskell allows infinitely large terms. Then, using an infinite term, show how this isomorphism fails. Next time: F-Coalgebras ### Share this: Posted in: Uncategorized | Tagged: category-theory, code, haskell # How GADTs inhibit abstraction This program ought to be well-behaved — it has no recursion (or recursion-encoding tricks), no undefined or error, no incomplete pattern matches, so we should expect our types to be theorems. And yet we can get inconsistent. What is going on here? Exercise: Identify the culprit before continuing. The problem lies in the interaction between GADTs and generalized newtype deriving. Generalized newtype deriving seems to be broken here — we created a type B which claims to be just like A including instances, but one of A‘s instances relied on it being exactly equal to A. And so we get a program which claims to have non-exhaustive patterns (in unSwitchB), even though the pattern we omitted should have been impossible. And this is not the worst that generalized newtype deriving can do. When combined with type families, it is possible to write unsafeCoerce. This has been known since GHC 6.7. In this post I intend to explore generalized newtype deriving and GADTs more deeply, from a more philosophical perspective, as opposed to just trying to plug this inconsistency. There are a few different forces at play, and by looking at them closely we will see some fundamental ideas about the meaning of types and type constructors. Generalized newtype deriving seems reasonable to us by appealing to an intuition: if I have a type with some structure, I can clone that structure into a new type — basically making a type synonym that is a bit stricter about the boundaries of the abstraction. 
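(The program this post opens with did not survive into this copy. Purely as a guess at its shape, and not necessarily the author's original code, the kind of interaction being described looks like the sketch below; the names other than unSwitchB are mine. On GHC before 7.8, that is, before the roles system, the deriving clause was accepted and evaluating unSwitchB switch dies at runtime with a non-exhaustive-pattern error, while modern GHC rejects the deriving instead.)

    {-# LANGUAGE GADTs, GeneralizedNewtypeDeriving #-}

    data A = MkA

    -- a GADT whose constructor mentions A by name: evidence of being *exactly* A
    data Switch x where
      SwitchA     :: Switch A
      SwitchOther :: Switch b

    class Switchable a where
      switch :: Switch a

    instance Switchable A where
      switch = SwitchA               -- relies on the type being exactly A

    newtype B = MkB A
      deriving (Switchable)          -- clone A's structure, including this instance

    unSwitchB :: Switch B -> String
    unSwitchB SwitchOther = "fine"
    -- SwitchA is omitted: it has type Switch A, so at type Switch B it "should"
    -- be impossible.  Yet the derived instance hands us exactly that value, and
    --   unSwitchB (switch :: Switch B)
    -- fails at runtime with "Non-exhaustive patterns in function unSwitchB".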
But the trouble is that you can clone parts of the structure without other parts; e.g. if X is an applicative and a monad, and I declare newtype Y a = Y (X a) deriving (Monad), then go on to define a different Applicative instance, I have done something wrong. Monad and applicative are related, so you can’t just change them willy nilly as though they were independent variables. But at the very least it seems reasonable that you should be able to copy all the structure, essentially defining a type synonym but giving it a more rigorous abstraction boundary. But in Haskell, this is not possible, and that is because, with extensions such as GADTs and type families, not all of a type’s structure is clonable. I’m going to be talking a lot about abstraction. Although the kind of abstraction I mean here is simple, it is one of the fundamental things we do when designing software. To abstract a type is to take away some of its structure. We can abstract Integer to Nat by taking away the ability to make negatives — we still represent as Integer, but because the new type has strictly fewer operations (it must be fewer — after all we had to implement the operations somehow!) we know more about its elements, and finely tuning that knowledge is where good software engineering comes from. When implementing an abstraction, we must define its operations. An operation takes some stuff in terms of that abstraction and gives back some stuff in terms of that abstraction. Its implementation must usually use some of the structure of the underlying representation — we define addition on Nat by addition on Integer. We may take it for granted that we can do this; for example, we do not have trouble defining: sum :: [Nat] -> Nat even though we are not given any Nats directly, but instead under some type constructor ([]). One of the properties of type constructors that causes us to take this ability to abstract for granted is that if A and B are isomorphic (in a sense that will become clear in a moment), then F A and F B should also be isomorphic. Since we, the implementers of the abstraction, are in possession of an bijection between Nats and the Integers that represent them, we can use this property to implement whatever operations we need — if they could be implemented on Integer, they can be implemented on Nat. This isomorphism property looks like a weak version of saying that F is a Functor. Indeed, F is properly a functor from a category of isomorphisms in which A and B are objects. Every type constructor F is a functor from some category; which category specifically depends on the structure of F. F's flexibility to work with abstractions in its argument is determined by that category, so the more you can do to that category, the more you can do with F. Positive and negative data types have all of Hask as their source category, so any abstractions you make will continue to work nicely under them. Invariant functors like Endo require bijections, but fortunately when we use newtype to create abstractions, we have a bijection. This is where generalized newtype deriving gets its motivation -- we can just use that bijection to substitute the abstraction for its representation anywhere we like. But GADTs (and type families) are different. A functor like Switch b has an even smaller category as its domain: a discrete category. The only thing which is isomorphic to A in this category is A itself -- whether there is a bijection is irrelevant. 
This violates generalized newtype deriving's assumption that you can always use bijections to get from an abstraction to its representation and back. GADTs that rely on exact equality of types are completely inflexible in their argument, they do not permit abstractions. This, I claim, is bad -- you want to permit the user of your functor to make abstractions. (Aside: If you have a nice boundary around the constructors of the GADT so they cannot be observed directly, one way to do this when using GADTs is to simply insert a constructor that endows it with the necessary operation. E.g. if you want it to be a functor from Hask, just insert Fmap :: (a -> b) -> F a -> F b If you want it to be a functor from Mon (category of monoids), insert: Fmap :: (Monoid n) => MonoidHom m n -> F m -> F n (presumably F m already came with a `Monoid` dictionary). These, I believe, are free constructions -- giving your type the structure you want in the stupidest possible way, essentially saying "yeah it can do that" and leaving it to the consumers of the type to figure out how.) In any case, we are seeing something about GADTs specifically that simple data types do not have -- they can give a lot of different kinds of structure to their domain, and in particular they can distinguish specific types as fundamentally different from anything else, no matter how similarly they may behave. There is another way to see this: defining a GADT which mentions a particular type gives the mentioned type unclonable structure, such that generalized newtype deriving and other abstraction techniques which clone some of a type's structure no longer succeed. ### Share this: Posted in: Uncategorized | Tagged: abstraction, category-theory, code, haskell # DI Breakdown I’m having a philosophical breakdown of the software engineering variety. I’m writing a register allocation library for my current project at work, referencing a not-too-complex algorithm which, however, has many degrees of freedom.  Throughout the paper they talk about making various modifications to achieve different effects — tying variables to specific registers, brazenly pretending that a node is colorable when it looks like it isn’t (because it might work out in its favor), heuristics for choosing which nodes to simplify first, categorizing all the move instructions in the program to select from a smart, small set when the time comes to try to eliminate them.  I’m trying to characterize the algorithm so that those different selections can be made easily, and it is a wonderful puzzle. I also feel aesthetically stuck.   I am feeling too many choices in Haskell — do I take this option as a parameter, or do I stuff it in a reader monad?  Similarly, do I characterize this computation as living in the Cont monad, or do I simply take a continuation as a parameter?  When expressing a piece of a computation, do I return the “simplest” type which provides the necessary data, do I return a functor which informs how the piece is to be used, or do I just go ahead and traverse the final data structure right there?  What if the simplest type that gives the necessary information is vacuous, and all the information is in how it is used? You might be thinking to yourself, “yes, Luke, you are just designing software.”  But it feels more arbitrary than that — I have everything I want to say and I know how it fits together.  My physics professor always used to say “now we have to make a choice — which is bad, because we’re about to make the wrong one”.  
He would manipulate the problem until every decision was forced.  I need a way to constrain my decisions, to find what might be seen as the unique most general type of each piece of this algorithm.  There are too many ways to say everything. ### Share this: Posted in: Uncategorized | Tagged: algorithms, code, dependency injection, haskell # A Gambler In Heaven A gambler has just lost all but one \$1 in Vegas and decides to go for a walk.  Unfortunately he gets hit by a bus but, having lived mostly a good life aside from the gambling, is shown God’s mercy and lands in heaven.  They only have one type of gambling in heaven, it is a simple choice-free game with the following rules: A coin is tossed.  If it comes up tails, you lose \$1.  If it comes up heads, your entire bankroll is tripled. The gambler only has the \$1 he had on him when he died (turns out you keep your money when you go to heaven).  Here is a possible outcome of his playing this game: • \$1 – H -> \$3 • \$3 – T -> \$2 • \$2 – H -> \$6 • \$6 – T -> \$5 • \$5 – T -> \$4 • \$4 – T -> \$3 • \$3 – T -> \$2 • \$2 – T -> \$1 • \$1 – T -> \$0 And thus he is broke. The question is this: starting with his \$1, what is the probability he will live the rest of eternity broke in heaven? The alternative, presumably, is that he spends eternity doing what he loves most: gambling.  Do all paths eventually lead to bankruptcy a la Gambler’s ruin, or is there a nonzero probability of playing forever? You may leave your ideas in the comments, and I will post a solution in a few days. ### Share this: Posted in: Uncategorized | Tagged: math, probability What do you say when you have nothing to say? What do you do when your song is a nice accompaniment to a vocal line, and there are no words to accompany? I could talk about my life. I could mention my new teaching job, the cosmic interference with my busking, the flood… those all seem so incidental. Maybe silence is okay. Maybe I am saying something — I am writing a lot of music, after all. I’m feeling pressure from Amanda (my girlfriend and closest friend) — not in any way that she is instigating, just a side-effect of who she is — to say something meaningful, something important. I can’t. I don’t feel like my ideas are important in that way, in the way that they are ready to jump from my mind into another’s and have any benefit. I think only vague half-truths: a strong conclusion, a value to hold on to, feels miles away. I know personal truths, I am feeling confident in them, and it is a great feeling, but words always miss the mark. They always make me seem either more certain or more uncertain than I am, with them I don’t know how to walk the fine line where I really communicate. And if I could . . . would I put it in a song; would I write it here? I don’t think I would be bothered if my music felt complete without words. But I have a couple of songs in the oven that are just begging for words, that’s musically obvious to me. The missing instrument is words. I see a symbol, a metaphor: my life for the song, the words for… what? But it does feel that way — my life has a great groove but is also missing something. Missing lyrics. I would normally argue that my lyricless music is saying something — it does have a message — but, like my thoughts and my truths, words cannot communicate it. But I’m incredulous. That argument doesn’t have the ring it used to. I – ### Share this: Posted in: Uncategorized | Tagged: music, personal # Why dream of being awake? 
To every action, give your whole self; I am wholly procrastinating, fully indecisive, completely half-listening. Mr. mindful, awake, clear-headed, be careful, pictures can be projected on the fog. We are all blind, stumbling pigeons, wholeheartedly. The most committed are those who believe they have conquered life — how would you say that a delusional maniac “doesn’t have his heart in it”? Then there are those of us who envy such commitment — to be stuck only wanting a delusion — is that a lesser or greater commitment? There is a transcendence here (, man). I want to experience my whole self, so I can’t just give up being lost and absent. My self includes my guilt, my self-judgments, my unacceptance of those judgments — no spiritual or psychological change I can make will do justice to my self. Nor will stagnation realize my true potential (a concept that makes the very same error). But we get trapped again. We can’t stop intending to change because it would not do justice to self; nor can we stop intending to stop intending to change. To lay a path to spiritual betterhood is to believe that you have, in some small way, failed to be a blind, stumbling pigeon. This is false but, as we have already covered, admirable. There’s nothing new to accept. This line of questioning is wrong. Self-acceptance is a vacuous goal. When that sinks in — when you really believe that — something changes inside of you. It plants the seed of the real self-acceptance, not that fuzzy-wuzzy kind you wanted. You’ll know when you have Achieved real self-acceptance because nothing happens, except maybe you will think and/or feel differently than you would have in some situations (it is unclear what that mechanism is). Then what? Well of course, self-acceptance is only one step along the path — after which there can be no more — the steps no longer look like steps, but flat step-like objects. But I have to ask again, now what? What do I do now? What is the next .. the next .. These are the chirpings of an analytical mind with nothing to analyze. Shhhhhhh… ### Share this: Posted in: Uncategorized | Tagged: experimental writing, philosophy, zen # Life as a Musician? So, it turns out I’m not dead. How about that? I have dropped out of school, and am busking for a living. It is tiring (especially when I forget to drink enough water), sometimes discouraging (when I play things to no response whatsoever or make \$5 in an hour), but mostly great. My job is making music! And more importantly, my job is making my music, or music I am in love with — although certain pieces tend to attract more tippers than others, so it’s not truly free (what is?). My grandmother contacted me telling me about a startup mixer so I could find a job. I don’t think she really understands my decision. I can understand that — she wants me to get a stable, well-paying job, have kids and a family, and go to church. The usual narrative. The other day I was idly contemplating being a father. Not now, of course. But I can see the draw; I can see that being a pretty special thing. The question is whether it is worth it to me. Sacrifice is part of love. But do I sacrifice for my child, or do I sacrifice a child (umm! — sacrifice having a child) for my other loves? That is not a question I am remotely prepared to answer. I used to think — perhaps I still do — that big questions like those aren’t really worth answering, at least not rationally. 
I suppose this “used to” is fairly recent, as I had spent a long time on them prior, and they led me nowhere but in circles of unfulfilled dustkicking. My self-image can be so limited at times, and the rational mind is a slave to its images. What I can really do, what I’m really made of, I perhaps thought, won’t be small enough to be so easily decided — it must be eased into, made part of myself through exploration and long, gradual growth. But the liberation I feel from this new occupation of mine has shown me that perhaps at points along this process such a life decision is valuable, that it can be a beacon that reminds me that I chose this because it was important to me — more important than anything else at one time — and so gives me something to hold onto in times of uncertainty or suffering. It sounds very compelling, doesn’t it? But I am still in the honeymoon phase of my relationship with my life as a musician, so the only thing I can be sure of is that my thoughts about it are distorted. And am I really good enough to make this a living? Maybe Boulder is the only place people appreciate public performances of amateur classical music. Maybe when I migrate for the winter I will be met with indifference or contempt, and I will be stuck in a new city with no job. Maybe when I improvise or play my originals people only tip me because I have brought the piano out, not because the music speaks to them in any deep way — I know that is not true, my second piano sonata is almost always met by applause, but it has been 10 years since I wrote that; do I still have it? A teenager passes by and plays most of the pieces I do — not as well, but not badly — and he will surpass me by my age. Will I ever have the guts to sing out there? A thousand fears and doubts dance their rite around my dream — all I can do is to go out there every day and hope it goes well. I think it’s proof that I’m alive. I pose this question to myself: would I rather be wildly successful in a software company, or wildly successful as a musician? The latter, by any metric. “Wildly” need not even appear. Standing on a plank and singing to the jury, my heart beating a thousand times a minute, with the conviction of a soldier — this outshines any vision of a successful software idea. I’m not leaving software. But my most exciting software ideas aren’t the kinds of things one can easily make a living on. I’m working on a browser-based programming environment which explores a new way of designing and organizing code. I don’t want to say too much about it because as I code the idea continues to develop in my mind, and I don’t want to nail it down yet (maybe ever). But anyway, to make money with that would sacrifice its beauty; this tool is not for productivity, at least not at first: it is exploring a way of thinking. It is easier to make a living making the music I love than the software I love. If my life is to overflow with love and happiness, music is the breadwinner. Again — only a month in. But I think this is the way to do it, for me. I’m not setting myself up for a comfortable life, but comfort is a trap anyway. It is the contrast that feels so good, and without that contrast comfort is just normal. Without discomfort to prepare the contrast, comfort is dull and boring. Anyway, that’s how I see it. Funny coming from a hedonist like me. I guess I’m having a stint of long-term hedonism at the expense of short-term. Maybe someday I won’t even feel the need to justify my choice anymore. That’s when I’ll really be in it. 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 59, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9563725590705872, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/140854-chord-length-circle-inscribed-rectangle.html
Thread: 1. Chord length of a circle inscribed in a rectangle The picture is of a rectangle whose base is greater than its height. A circle is drawn in the rectangle, which is tangent to the top, left, and bottom edges. the diagonal from the bottom left corner to the top right corner meets the circle in points E and F. Express the distance EF in terms of the base and height. Not sure how to even start this off other than r=h/2 2. Let ABCD be the rectangle where AB is b and BC is h. Draw a line PFQ parallel to AB. PF is the diameter of the circle. PEF is a right angled triangle which is similar to BCD. Now compare equivalent sides to find the length of the chord EF in terms of h and b. 3. Originally Posted by sa-ri-ga-ma Let ABCD be the rectangle where AB is b and BC is h. Draw a line PFQ parallel to AB. PF is the diameter of the circle. ... I don't want to pick at you but in my opinion this is only correct if b = 2h. If you change the ratio of b and h then the line PF parallel to AB is not a diameter of the circle. See attachment. Attached Thumbnails 4. I tried working it out to the best of my knowledge but I don't really know if it's correct: I don't even know if that's correct or if I'm going in the right direction 5. Hello Mustard Welcome to Math Help Forum! I have an answer, based on the Tangent-Secant Theorem - and a whole lot of algebra. It is: $EF = \sqrt{\frac{2bh^3}{b^2+h^2}}$ This checks out for two special cases: When $b = h, EF = h$, which is so, because EF is then the diameter of the circle, which is of length $h$. As $b \to \infty, EF \to 0$, which is also correct. Perhaps you'd like to check my working. To keep the algebra simpler, I let: $h = 2r$ $b = r+s$ and, in the diagram you have drawn: $DE = p$ $BF = q$ $BD = t$ $EF = x$ Then the lengths of the tangents from D and B to the circle, are $r$ and $s$ respectively. So, using the Tangent-Secant Theorem: $p(p+x) = r^2$ ...(1) $q(q+x) = s^2$ ...(2) So, from (1): $p^2+px-r^2 = 0$ $\Rightarrow p = \frac{-x+\sqrt{x^2+4r^2}}{2}$, taking the positive root Similarly $\Rightarrow q = \frac{-x+\sqrt{x^2+4s^2}}{2}$ But $p+x+q = t$ Therefore $\frac{-x+\sqrt{x^2+4r^2}}{2}+x+\frac{-x+\sqrt{x^2+4s^2}}{2}=t$ $\Rightarrow\sqrt{x^2+4r^2}+\sqrt{x^2+4s^2}=2t$ $\Rightarrow x^2+4r^2+x^2+4s^2+2\sqrt{(x^2+4r^2)(x^2+4s^2)}=4t^ 2$ and, then if we re-arrange, square both sides, and re-arrange again, this simplifies to: $4t^2x^2=4(t^2-r^2-s^2)^2-16r^2s^2$ ...(3) So we now have an expression for x that we can write in terms of $b$ and $h$. Now: $t^2 = b^2+h^2$ (Pythagoras' Theorem on $\triangle DBC$) $= (r+s)^2+(2r)^2$ $= 5r^2 + 2rs+s^2$ So (3) becomes: $4(b^2+h^2)x^2 = 4(4r^2+2rs)^2 - 16r^2s^2$ $=4(16r^4+16r^3s+4r^2s^2)-16r^2s^2$ $(b^2+h^2)x^2=16r^4+16r^3s$ $\Rightarrow x^2= \frac{16r^3(r+s)}{b^2+h^2}$ $\Rightarrow x =\sqrt{\frac{2bh^3}{b^2+h^2}}$ Grandad
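The closed form is easy to sanity-check numerically. Below is a minimal Python sketch (not part of the original thread) which assumes the rectangle sits with its bottom-left corner at the origin, so the circle tangent to the left, top and bottom sides has radius h/2 and centre (h/2, h/2); it intersects the diagonal with the circle and compares the chord length with Grandad's formula. The test values of b and h are arbitrary.

```python
import math

def chord_length(b, h):
    """Numerically intersect the diagonal from (0,0) to (b,h) with the
    circle of radius h/2 centred at (h/2, h/2) and return |EF|."""
    r = h / 2.0
    cx, cy = r, r
    # Parametrise the diagonal as P(t) = (t*b, t*h), 0 <= t <= 1,
    # and solve |P(t) - C|^2 = r^2, a quadratic A t^2 + B t + C0 = 0.
    A = b * b + h * h
    B = -2 * (b * cx + h * cy)
    C0 = cx * cx + cy * cy - r * r
    disc = B * B - 4 * A * C0
    if disc < 0:
        return 0.0          # defensive; the diagonal always meets this circle
    t1 = (-B - math.sqrt(disc)) / (2 * A)
    t2 = (-B + math.sqrt(disc)) / (2 * A)
    return abs(t2 - t1) * math.sqrt(A)

def closed_form(b, h):
    return math.sqrt(2 * b * h**3 / (b**2 + h**2))

for b, h in [(2.0, 2.0), (3.0, 2.0), (5.0, 1.0), (10.0, 7.0)]:
    print(b, h, chord_length(b, h), closed_form(b, h))
    # for b = h the chord is the full diameter h, matching the special case above
```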
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9267584681510925, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/79850/software-for-computing-baker-campbell-hausdorff/79911
## Software for Computing Baker-Campbell-Hausdorff ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Does anyone have a recommendation for software which can efficiently calculate the Baker-Campbell-Hausdorff series in classical Lie algebras? Right now, I have a problem which boils down to understanding Baker-Campbell-Hausdorff with respect to a basis in su(2), and this seems like the kind of thing Sage or Mathematica should be able to handle. However, I haven't had to use computer algebra packages for Lie theory before, so I would love to be pointed in the right direction. Many thanks, Jesse - 1 For $SU(2)$, rather than using BCH, why not just exponentiate the matrices, multiply them and compute the logarithm? The logarithm is the most involved step, but it basically amounts to computing the eigenvalues and eigenvectors. This is a relatively straightforward task. Do you want the answer to be in a particular form? – Ryan Budney Nov 2 2011 at 17:35 I suppose in principle there's a nice closed-form answer if you use the above approach. You can determine the axis and angle of rotation by solving some linear and a quadratic equation. – Ryan Budney Nov 2 2011 at 17:40 I've been told that LiE is a good software package for many Lie-theoretic problems. I've never used it myself, so I'll leave this as a comment, rather than an answer. – Theo Johnson-Freyd Nov 2 2011 at 18:02 @Ryan, SU(2) is a warm-up for SU(n), so while I could do things by hand here, I figured it's worth my while to find a good computer program now. Theo, thanks for the mention on LiE. I'll take a look at it. – Jesse Wolfson Nov 2 2011 at 19:13 1 arxiv.org/abs/math-ph/9905012 – Steve Huntsman Nov 3 2011 at 0:10 show 1 more comment ## 3 Answers There is a quite comprehensive package for Lie algebras in Maple. It is developed by Ian Anderson (from Utah State not Jethro Tull). - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. You don't need any packages to be able to do that in Mathematica for small Lie algebras such as su(2), probably not in SAGE either (I'm familiar with Mathematica). Anyway, Basically you just need to use the MatrixExp function and solve some equations. E.g. define $M_1 = \text{MatrixExp}\left[\sum\limits_i \alpha^i X_i\right]$ and $M_2 =\prod\limits_i\text{MatrixExp}\left[\beta^i X_i\right]$ You may get transcendental equations, but you can simplify things by hand. Here's $M_2$ explicitly (just pick a basis $X_i$): $\text{M2}=\text{MatrixExp}[X[1]\alpha [1]].\text{MatrixExp}[X[2]\alpha [2]].\text{MatrixExp}[X[3]\alpha [3]]\text{//}\text{FullSimplify}$ - umm and I mean of course then solve the equation $M_1 = M_2$ for $\alpha$'s or $\beta$'s... – H. Arponen Nov 3 2011 at 9:32 I don't have a first-hand experience, but hope this is helpful anyway. K.Engo, A.Marthinsen and H.Munthe-Kaas have done a lot of work on numerical methods for solving ODE on manifolds (and Lie groups in particular). See for example their paper ''DiffMan: an object-oriented Matlab toolbox for solving differential equations on manifolds'', Appl. Numerical Mathematics, 39 (2001), p.323 where they discuss a particular package. -
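For a single concrete product in su(2), the "exponentiate, multiply, take the logarithm" route suggested in the comments can be sketched numerically with SciPy. The snippet below is such a sketch (it is not from any of the answers); it assumes the anti-Hermitian basis X_k = -(i/2) σ_k built from the Pauli matrices and arbitrary small coefficients. A symbolic BCH series, as opposed to a numerical one, would still call for a CAS package such as the Maple one mentioned above.

```python
import numpy as np
from scipy.linalg import expm, logm

# su(2) basis: X_k = -(i/2) * sigma_k  (anti-Hermitian, traceless)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
X = [-0.5j * s for s in (sx, sy, sz)]

def bch(A, B):
    """Z with exp(Z) = exp(A) exp(B), computed numerically."""
    return logm(expm(A) @ expm(B))

A = 0.3 * X[0] + 0.1 * X[2]
B = 0.2 * X[1]
Z = bch(A, B)

# Compare with the BCH series truncated after the first commutator:
comm = lambda P, Q: P @ Q - Q @ P
Z2 = A + B + 0.5 * comm(A, B)
print(np.linalg.norm(Z - Z2))   # small: the difference is third order in the coefficients
```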
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922107458114624, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/60341?sort=oldest
## applications of the sphere theorem ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am looking for interesting applications of the 1/4-pinched sphere theorem. The theorem says: A compact, simply connected riemannian manifold whose sectional curvature K satisfies $1/4 < K \leq$ 1 (possibly after multiplying the metric by a constant) is homeomorphic (recently extended to "diffeomorphic") to the sphere. I just wanted to know: is it just a beautiful theorem or can you use it in concrete situations to derive some conclusions difficult to see otherwise? I am interested in this just because I am curious, I do not have any specific purpose in mind. - 11 Well, it is a beautiful and natural theorem... Pleasure should be counted as an application! – Mariano Suárez-Alvarez Apr 2 2011 at 3:57 3 One the top of this web page, there is a link labeled "how to ask". Please read the page that is linked there, and revise your question. – S. Carnahan♦ Apr 2 2011 at 4:17 3 When you have finished, please flag for moderator attention, so the question can be reopened. – S. Carnahan♦ Apr 2 2011 at 4:18 8 I personally think that this is not such a bad question that it needed to be insta-closed. I'd be interested in hearing answers! – Andy Putman Apr 2 2011 at 14:59 3 @whatever, read Comparison Theorems in Riemannian Geometry, by Jeff Cheeger and David Ebin, undergraduate/beginning graduate level. There are more recent books with similar material as well. This will satisfy any curiosity you might have about the place of the two Sphere Theorems, homeomorphic and diffeomorphic, as to difficulty and place in mathematics. Chapter 7 discusses alternate differentiable structures. Note that Calabi and Gromoll proved the Differentiable version in 1966, but needed a pinchng constant depending on dimension.. Finally, actually read Brendle and Schoen. – Will Jagy Apr 2 2011 at 19:54 show 15 more comments ## 2 Answers The main theme of global Riemannian geometry is to derive topological conclusions from geometric assumptions. Sphere theorems provide various assumptions under which a manifold is (homeomorphic, diffeomorphic, or almost isometric) to a sphere. The significance of sphere theorems is not in their applications or implications but in the beautiful mathematics they generated. Tools developed to prove various sphere theorems is a backbone of modern comparison geometry, and a great place to learn about it is the survey by Abresch and Meyer. More recently Brendle-Schoen used Ricci flow to prove a definitive differentible sphere theorem; this and closely related work by Bohm-Wilking are (in my view) the most spectacular applications of Ricci flow beyond dimension three. - 1 Very nice answer. – Deane Yang Apr 3 2011 at 14:20 Thanks guys, those were really very helpful. I am particularly reading the survey by Abresch and Meyer now. I saw someone comment on the meta thread about the application of the sphere theorem to the aphericity of knot complements....can anyone tell me a reference for this? – whatever Apr 7 2011 at 0:39 2 @whatever: I doubt asphericity of knowt complements is related to this sphere theorem. It might be related to another one: en.wikipedia.org/wiki/… – Igor Belegradek Apr 7 2011 at 3:46 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. 
An application occurs in the study of the asymptotic behavior of complete manifolds with certain curvature decay. Let $M$ be an $n$-dimensional complete non-compact manifold. Suppose that

1. $M$ is simply connected at infinity,
2. the sectional curvatures of $M$ go to zero at infinity,
3. there exists a foliation of the ends of $M$ by $(n-1)$-dimensional submanifolds,
4. these submanifolds have controlled second fundamental form.

Then you may use the Gauss equation and the differentiable sphere theorem to conclude that these submanifolds are diffeomorphic to the sphere.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9294918179512024, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/56078?sort=newest
## Geometric interpretation of $BN$-pairs ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) My question is relative to a geometric interpretation of the $BN$-pairs that arise in Tits' theory of buildings. Here is a definition that comes from an article by G. Stroth (Nonspherical spheres). $[\ldots]$ Let $\mathcal{P} = {P_1, \ldots, P_n}$ be a minimal parabolic system for a group $G$, $B=P_1 \cap \ldots \cap P_n$ the Borel subgroup. A subgroup $N$ of $G$ is called a Weyl group for $\mathcal{P}$ iff 1) $N= \langle x_1, \ldots, x_n \rangle, x_i \in P_i-B, x_i^2 \in B$. 2) $B \cap N$ is a normal subgroup of $N$. 3) $N \cap P_i = (B \cap N) \langle x_i \rangle , i = 1, \ldots, n$. $[\ldots]$ If additionally we have (A) $G = BNB$ and (B) $BgBhB \subset (BgB) \cup (BghB)$ for all $g,h \in N$ then we have a $BN$-pair. Geometrically, the Weyl group $N$ is the stabilizer of an apartment $\Delta$ of the geometry $\Gamma$ defined from $\mathcal{P}$, and $B$ is the stabilizer of a chamber of $\Delta$. I am trying to get a clear geometric view of these objects. Here is my question. What could be a geometric interpretation of condition (B)? - I think you must have copied down (B) wrong, since none of the things that are supposed to be examples satisy it. Probably you want $h$ to be one of the $x_i$'s. – Ben Webster♦ Feb 20 2011 at 17:11 @Ben Webster: I checked again, and I made no mistake in copying it down. – Thomas Connor Feb 20 2011 at 17:31 Fair enough. That definition is stronger than the usual definition of B,N-pair (en.wikipedia.org/wiki/(B,_N)_pair) and rules out many interesting examples. – Ben Webster♦ Feb 20 2011 at 17:46 That condition (B), whether copied accurately or not, is troubling. For example, in GL(3,k) over a field k, the largest (spherical) Bruhat cell $Bw_oB$ (with longest Weyl element $w_o$) is such that $Bw_oB\cdot Bw_oB=GL(3,k)$. That is, all 3! of the Bruhat cells are hit. The condition (B) above would require that $Bw_oB\cdot Bw_oB=B\cup Bw_oB$. As Ben W. noted above, it seems likely that $h$ in (B) should be among the $x_i$'s. – paul garrett Jun 27 2011 at 20:46 ## 2 Answers Note: I'm using the general definition of BN-pair, which is weaker than the condition (B) given above. It's a triangle inequality. One way to think about BN-pair is that they give a sort of combinatorial distance function on $G/B$. Given two cosets $g_1B$ and $g_2B$, you look at the product $Bg_1^{-1}g_2 B\in B\backslash G/B\cong N/(N\cap B):= W$ and think of that as the "distance" between them. To get a more numberish distance, you can let the length of an element of $W$ be the length of the shortest product of $x_i$'s which gives it. If I take two cosets $BgB$ and $BhB$, and expand those as $Bx_{i_1}\cdots x_{i_n}B$ and $Bx_{j_1}\cdots x_{j_m}B$, then $$BgB\cdot BhB\subset Bx_{i_1}\cdots x_{i_n}B\cdot Bx_{j_1}\cdots x_{j_m}B\subset Bx_{i_1}B\cdots Bx_{i_n}Bx_{j_1}B\cdots Bx_{j_m}B.$$ Applying (B) inductively, we see that the last term is in the union of certain double cosets which have length shorter than the sum of that of $BgB$ and $BhB$. To apply this to the "distance function," note that $Bg_1^{-1}g_3B\subset Bg_1^{-1}g_2B\cdot Bg_2^{-1}g_3B$, so the length of the distance between $g_1$ and $g_3$ is less than the sum of the lengths for $g_1$ to $g_2$ and $g_2$ to $g_3$: the triangle inequality. Of course, that just shows that (B) implies the triangle inequality, but it's easy to see that it's also a special case. 
- ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. As a further commentary beyond what Ben has said, I'd emphasize that Stroth (along with Ronan and other finite group theorists) has modified some of the BN-pair and building formalism introduced by Tits. My understanding is that this is done partly in an attempt to unify the study of sporadic simple groups and simple groups of Lie type within the kind of "geometric" setting formulated first for the latter groups. In any case, the broader notion of "parabolic system" in a group as used here is motivated by the earlier Lie structure but requires some experimentation with additional axioms beyond what Tits did. As Ben points out, the condition (B) you quote from that 1990 Durham conference article by Stroth goes beyond the conventional BN-pair axiom. In that conventional setting, which is close to the geometry of buildings and apartments, the Weyl group and its length function play a vital role in talking about distances in the geometry, etc. This is partly encoded in the usual version of condition (B). Whether or not the structures studied by Stroth really add "geometric" flavor to the finite groups of interest is more than I can judge, but this does get outside the conventional framework of buildings with finite Weyl groups. By the way, it can do no harm to digest some of the original Tits thinking about the subject formulated as a detailed series of exercises for Section 2 of Chapter IV in the 1968 Bourbaki Chapters IV-VI of Groupes et algebres de Lie (later published in English translation by Springer). Naturally much of this shows up in his own Springer lecture notes on the "spherical" case as well as in later books on buildings. -
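Condition (B) can also be checked by brute force in the smallest classical example. The Python sketch below is illustrative only (it is not from the thread): it takes G = GL_2(F_3), B the upper-triangular subgroup, and w the nontrivial Weyl representative, and verifies the rank-one instance of (B) with g = h = w, namely BwB·BwB ⊆ BwB ∪ Bw²B = BwB ∪ B. The choice p = 3 is arbitrary.

```python
from itertools import product

p = 3  # work in GL_2(F_p) for a small prime

def mul(x, y):
    # matrices stored row-major as 4-tuples (a, b, c, d) = [[a, b], [c, d]]
    a, b, c, d = x
    e, f, g, h = y
    return ((a*e + b*g) % p, (a*f + b*h) % p,
            (c*e + d*g) % p, (c*f + d*h) % p)

def det(x):
    a, b, c, d = x
    return (a*d - b*c) % p

G = [m for m in product(range(p), repeat=4) if det(m) != 0]
B = [m for m in G if m[2] == 0]                  # invertible upper-triangular matrices
w = (0, 1, 1, 0)                                 # nontrivial Weyl representative

def double_coset(g):
    return {mul(mul(b1, g), b2) for b1 in B for b2 in B}

BwB = double_coset(w)
BB  = double_coset((1, 0, 0, 1))                 # this is just B as a set

# Check BwB * BwB is contained in BwB ∪ B (the g = h = w instance of (B)):
prod_set = {mul(x, y) for x in BwB for y in BwB}
print(prod_set <= (BwB | BB))                    # True
print(len(G), len(BB), len(BwB))                 # 48 12 36: Bruhat decomposition G = B ∪ BwB
```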
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9503733515739441, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/24139-work-pump-empty-tank-full-work-lift-full-tank.html
# Thread:

1. ## Work to pump an empty tank full = Work to lift the full tank

The center of a spherical tank of radius R is at a distance H > R above the ground. A liquid of weight density p is at ground level. Show that the work required to pump the initially empty tank full of this liquid is the same as that to lift the full tank the distance H (ignoring the weight of the tank itself).

Thank you for any hints and suggestions!

2. ## Work = change in energy

hasanbalkan, my first response is that gravity is a conservative force, meaning we can define a gravitational potential energy U such that the gravitational force $\mathbf{F} = -\nabla U$ (force is the opposite of the gradient), and the work done in moving an object or system from one position to another is equal to the change in potential energy of the object or system, and is independent of path (the fundamental theorem of line integrals). Thus, as the two paths start with states of the same potential energy (for a massless tank) and end at the same state, the change in potential energy, and thus the work, is the same for both paths.

More explicitly, you can use the fact that $\mathbf{F} = -mg \hat k$, where $\hat k$ is the unit vector in the positive z direction, which we make the upward direction. To lift the liquid, the force we apply (from which we compute the work) is the opposite of this. Thus, when a bit of liquid of volume dV, and thus mass $dM = \rho\, dV$, is raised (from the ground z=0) by a height h, the work done is

$\int \mathbf{F}\cdot\, \mathbf{ds} = \int_{0}^{h}\rho\, dV g\, dz =\rho\, dV g h$

Thus the work to fill the tank at height H is

$\iiint\limits_{S'} \rho g z\, dx\,dy\,dz = \rho g \iiint\limits_{S'} z\, dx\,dy\,dz$

where S' is a sphere of radius R with center at a height z=R+H, in other words, the region of the tank's interior at that height. Similarly, the work to fill the tank on the ground is

$\iiint\limits_{S} \rho g z \,dx\,dy\,dz = \rho g \iiint\limits_{S} z \, dx\,dy\,dz$

where S is a sphere of radius R with center at a height z=R, in other words, the region of the tank's interior when on the ground. And the work to lift the tank from the ground to a height H is simply $MgH$, where $M = \rho V = \tfrac{4}{3}\pi R^3 \rho$ is the total mass of liquid in the tank.

You can thus perform the integrals (converting to cylindrical coordinates will help) and show

$\rho g \iiint\limits_{S'} z\, dx\,dy\,dz = \rho g \iiint\limits_{S} z\, dx\,dy\,dz + \tfrac{4}{3}\pi R^3 \rho g H$

--Kevin C.
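Since each triple integral above is just the tank's volume times the height of its centroid, the identity is also easy to check by Monte Carlo. The following Python sketch is not from the thread, and the values of R, H, rho and g are arbitrary test numbers; it estimates the integral of z over the ball in both positions and compares filling the raised tank with filling the grounded tank plus lifting it.

```python
import numpy as np

rng = np.random.default_rng(0)
R, H = 1.5, 4.0          # arbitrary test values with H > R
rho, g = 1000.0, 9.81

V = 4 / 3 * np.pi * R**3   # volume of the tank

def integral_of_z(center_height, n=2_000_000):
    """Monte Carlo estimate of the integral of z over a ball of radius R
    whose centre sits at z = center_height."""
    pts = rng.uniform(-R, R, size=(n, 3))
    pts = pts[(pts**2).sum(axis=1) <= R**2]      # keep points inside the ball
    return V * (pts[:, 2] + center_height).mean()

work_elevated = rho * g * integral_of_z(R + H)   # fill the raised tank
work_ground   = rho * g * integral_of_z(R)       # fill the tank on the ground
work_lift     = rho * g * V * H                  # lift the full tank: M g H

print(work_elevated)
print(work_ground + work_lift)   # agrees up to Monte Carlo error
```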
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940605103969574, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/111440/examples-of-apparent-patterns-that-eventually-fail/111452
# Examples of apparent patterns that eventually fail Often, when I try to describe mathematics to the layman, I find myself struggling to convince them of the importance and consequence of 'proof'. I receive responses like: "surely if the Collatz Conjecture is true up to $20 \times 2^{58}$, then it must always be true?'; and "the sequence of number of edges on a complete graph starts $0,1,3,6,10$, so the next term must be $15$ etc". Granted, this second statement is less logically unsound than the first since it's not difficult to see the reason why the sequence must continue as such; nevertheless, the statement was made on a premise that boils down to "interesting patterns must always continue". I try to counter this logic by creating a ridiculous argument like "the numbers $1,2,3,4,5$ are less than $100$, so surely all numbers are", but this usually fails to be convincing. So, are there any examples of non-trivial patterns that appear to be true for a large number of small cases, but then fail for some larger case? A good answer to this question should: 1. be one which could be explained to the layman without having to subject them to a 24 lecture course of background material, and 2. have as a minimal counterexample a case which cannot (feasibly) be checked without the use of a computer. I believe conditions 1. and 2. make my question specific enough to have in some sense a "right" (or at least a "not wrong") answer; but I'd be happy to clarify if this is not the case. I suppose I'm expecting an answer to come from number theory, but can see that areas like graph theory, combinatorics more generally and set theory could potentially offer suitable answers. - 38 The sentence: ""the numbers 1,2,3,4,5 are less than 100, so surely all numbers are" - Is interesting. – Emmad Kareem Feb 20 '12 at 22:01 6 – Gerry Myerson Feb 20 '12 at 22:26 6 This doesn't satisfy b), but how about "$n^2-n+41$ is always prime"? (it's true for $1\le n\le 40$). – David Mitra Feb 20 '12 at 22:39 6 – deinst Feb 21 '12 at 3:45 10 @EmmadKareem After reading halfway through this page, this looks like a challenge to see who can give the most mind blowing example of this simplified version: "N not equals 82174583229565384923 for N = 1,2,3,4... breaks down at N = 82174583229565384923" – Jake Feb 22 '12 at 13:27 show 11 more comments ## 23 Answers I'll hereby translate an entry in the blog Gaussianos ("Gaussians") about Polya's conjecture, titled: ## A BELIEF IS NOT A PROOF. We'll say a number is of even kind if in its prime factorization, an even amount of primes appear. For example $6 = 2\cdot 3$ is a number of even kind. And we'll say a number is of odd kind if the number of primes in its factorization is odd. For example, $18 = 2·3·3$ is of odd kind. ($1$ is considered of even kind). Let $n$ be any natural number. We'll consider the following numbers: 1. $E(n) =$ number of positive integers less or equal to $n$ that are of even kind. 2. $O(n) =$ number of positive integers less or equal to $n$ that are of odd kind. Let's consider $n=7$. In this case $O(7) = 4$ (number 2, 3, 5 and 7 itself) and $E(7) = 3$ ( 1, 4 and 6). So $O(7) >E(7)$. For $n = 6$: $O(6) = 3$ and $E(6) = 3$. Thus $O(6) = E(6)$. In 1919 George Polya proposed the following result, know as Polya's Conjecture: For all $n > 2$, $O(n)$ is greater than or equal to $E(n)$. Polya had checked this for $n < 1500$. In the following years this was tested up to $n=1000000$, which is a reason why the conjecture might be thought to be true. But that is wrong. 
In 1962, Lehman found an explicit counterexample: for $n = 906180359$, we have $O(n) = E(n) – 1$, so: $$O(906180359) < E(906180359).$$ By an exhaustive search, the smallest counterexample is $n = 906150257$, found by Tanaka in 1980. Thus Polya's Conjecture is false. What do we learn from this? Well, it is simple: unfortunately in mathematics we cannot trust intuition or what happens for a finite number of cases, no matter how large the number is. Until the result is proved for the general case, we have no certainty that it is true. - 86 No matter how smart I feel like I'm getting, when I want to be humbled, I simply type `math.stackexchange.com` into my URL bar. – orokusaki Feb 21 '12 at 1:23 2 Note that “y por tanto” is Spanish for “and therefore”. – tchrist Feb 21 '12 at 1:28 15 In 1942, Ingham showed that Polya's conjecture implies the Riemann hypothesis and that the positive imaginary parts of the nontrivial zeros of the Riemann zeta-function are linearly dependent over $\mathbf Q$. The second conclusion is very suspicious, so in principle this should have cast doubt on Polya's conjecture (I don't know if it really did). And in 1958 the conjecture was disproved by Haselgrove without a specific counterexample being found, much like Matt's answer about the size switch between $\pi(x)$ and ${\rm Li}(x)$. – KCd Feb 21 '12 at 1:42 2 It doesn't hold because E() cheated by getting 1 arbitrarily thrown into the set. In fact, if 1 is considered odd, the inequality can be made stronger by eliminating the words "or equal" from the conjecture ;-) – phkahler Feb 21 '12 at 15:20 1 @phkahler if you go by the fact that $1$ is not a prime then $1$ is of even kind because it has $0$ prime factors (and last I checked $0$ is even) – ratchet freak Feb 22 '12 at 0:56 show 11 more comments From "Experimentation in Mathematics" Borwein, Bailey and Girgensohn 2004 : $$\sum_{n=1}^{\infty} \lfloor n\cdot e^{\frac{\pi}3\sqrt{163}}\rfloor 2^{-n}=1280640\ \ \text{(correct to at least half a billion digits!)}$$ Using the $\mathrm{sinc}$ function ($\mathrm{sinc}(x)=\frac{\sin(x)}x$ and this paper) : $$\int_0^{\infty} \mathrm{sinc}\left(\frac x1\right) dx=\frac{\pi}2$$ $$\int_0^{\infty} \mathrm{sinc}\left(\frac x1\right)\cdot \mathrm{sinc}\left(\frac x3\right)dx=\frac{\pi}2$$ $$\int_0^{\infty} \mathrm{sinc}\left(\frac x1\right)\cdot \mathrm{sinc}\left(\frac x3\right)\cdot \mathrm{sinc}\left(\frac x5\right)dx=\frac{\pi}2$$ $$\cdots$$ $$\int_0^{\infty} \mathrm{sinc}\left(\frac x1\right)\cdot \mathrm{sinc}\left(\frac x3\right)\cdot \mathrm{sinc}\left(\frac x5\right)\cdots \mathrm{sinc}\left(\frac x{13}\right)dx=\frac{\pi}2$$ $$\int_0^{\infty} \mathrm{sinc}\left(\frac x1\right)\cdot \mathrm{sinc}\left(\frac x3\right)\cdots \mathrm{sinc}\left(\frac x{15}\right)dx=\frac{467807924713440738696537864469}{ 935615849440640907310521750000}\pi$$ In fact the story doesn't end here! It was found (see Baillie and Borweins' "Surprising Sinc Sums and Integrals") that you could replace the integrals by the corresponding $\frac 12 + \sum_1^{\infty}$ series : $$\frac 12 + \sum_{m=1}^{\infty} \prod_{k=0}^N \mathrm{sinc}\left(\frac m{2k+1}\right)=\int_0^{\infty} \prod_{k=0}^{N} \mathrm{sinc}\left(\frac x{2k+1}\right)\ dx.$$ for the previous values of ($N=0,1,2,3\cdots 7$) but also for larger values of $N$ up to $40248$. For $N\gt 40248$ the left part is always larger than the integral at the right! 
At this point the reciprocals of the odd integers could be replaced by other values (see the paper for the conditions required for the equality to hold) for example by the reciprocals of the prime numbers. Now, because of the slow divergence in this case, the equality breaks down only for $N \approx 10^{176}$ (when the sum of values slowly crosses the $2\pi$ barrier) and with an error smaller than $\displaystyle 10^{-10^{86}}$. - 15 That is just uproariously funny! – ncmathsadist Feb 21 '12 at 0:27 22 The 'trick' in the sinc integrals, incidentally, is that the last one is the first for which the sum of the arguments (at x=1) exceeds 2: 1/1+1/3+1/5+...+1/13 = 1.9551..., but add 1/15 and the sum becomes 2.0218... - the behavior is, roughly, tied to that sum. – Steven Stadnicki Feb 21 '12 at 1:20 4 The $\sqrt{163}$ sum leaves me cold but the sinc integrals are just great. I learned something today, thanks. – Did Feb 21 '12 at 15:34 8 – Bruno Feb 21 '12 at 16:51 6 – Byron Schmuland Feb 21 '12 at 21:47 show 7 more comments The seminal paper on this is Richarg Guy's The Strong Law of Small Numbers Proclaiming "there aren't enough small numbers to meet the many demands made of them," it lists $35$ patterns that don't pan out. Others have expanded on the 'law of small numbers' Such as here (and a few more links on that page) A particularly great example from the second link: • $\gcd(n^{17}+9, (n+1)^{17}+9)$ seems to always be one. In fact, if you had your computer checking this for $n=1, 2, 3, \dots$ successively, it would never find a counter-example. That is because the first counter-example is $$8424432925592889329288197322308900672459420460792433\;.$$ - Choose n points around the circumference of a circle, and join every point to every other with a line segment. Assuming that no three of the line segments concur, how many regions does this divide the circle into? There's a rather obvious pattern, that breaks down at n=6. - 4 For reference, this is OEIS A000127, discussed on the Math Forum. – Rahul Narain Feb 21 '12 at 2:45 I mention this often when talking about "what's the next number in this sequence?" - type problems. – The Chaz 2.0 Feb 21 '12 at 3:29 2 This was exactly the example that I was going to give, +1 from me – Francesco Feb 21 '12 at 7:36 14 – Charles Morisset Feb 21 '12 at 9:07 2 You can also tell people that the tenth element of this sequence is 256, and they think "aha! the pattern continues!" – Michael Lugo Mar 6 '12 at 22:24 I am kind of partial to the old $n^2 + n + 41$ chestnut, namely that the expression is prime for all $n$. It fools an awful lot of people. - 12 The chestnut being what exactly? – Dason Feb 21 '12 at 3:07 4 It appears to be a prime number generating polynomial. Appears... (edit) It also appears that @Seth and I saw this at the same time! – The Chaz 2.0 Feb 21 '12 at 3:30 10 This doesn't really satisfy condition (2.) of the original question. No sane person needs a computer to check the $n=41$ counterexample. – Tib Feb 21 '12 at 17:27 5 @Tib: It is easy to check if you push them in the right direction. $41^2 + 41 + 41$ is clearly divisible by 41. – Mike Boers Feb 23 '12 at 20:45 9 $40^2+40+41$ is also clearly divisible by 41. 
– MJD Jun 2 '12 at 23:24 show 4 more comments Take from Joseph Rotman's "A First Course in Algebra: with applications": The smallest value of $n$ for which the function $f(n) = 991n^2 + 1$ is a perfect square is $$n = \mbox{12,055,735,790,331,359,447,442,538,767}.$$ (This number is approximately $50$ times larger than the square of the United States' national debt!) On a similar note, the smallest value of $n$ such that the function $g(n) = 1,000,099n^2 + 1$ is a perfect square has $1116$ digits. - What is the pattern here? The f of (the first 1.2x10^28 integers) is not a perfect square?? – The Chaz 2.0 Feb 21 '12 at 3:32 3 @Chaz: Correct. The values of $f(n)$ for $1 \leq n \leq 12,055,735,790,331,359,447,442,538,766$ are all non-squares. – JavaMan Feb 21 '12 at 3:41 3 That doesn't feel like much of a pattern. It's more of the absence of one! – fluteflute Feb 23 '12 at 9:45 5 @fluteflute: After checking the first $12,000,000,000,000,000,000,000,000,000$ values you wouldn't guess that ALL values are nonsquares? – JavaMan Feb 23 '12 at 12:45 2 Well, no. There is no intuition of a reason why ALL values would be nonsquares. – wok May 4 '12 at 11:18 show 1 more comment Claim: The cyclotomic polynomials $\phi_n(x)$ have coefficients in the set $$\{ -1, 0, 1 \}$$ It holds for any number that doesn't have at least $3$ distinct odd prime factors, which means the smallest counterexample is $3 \cdot 5 \cdot 7 = 105$. So a naive undergrad probably won't ever see a counterexample unless he is specifically shown $\phi_{105}$. - Yea, I am not sure I have ever seen one. I think Dummit and Foote mentions that they are not always in -1, 0, or 1 but doesn't say anything else about it, unless I am mistaken. – Graphth Feb 21 '12 at 23:39 Perhaps a little technical, but I think you can give the flavour without the details. It was long believed that the logarithmic integral $\operatorname{Li}(x)$ is greater than the prime counting function $\pi(x)$ for all $x$, and computations verified this for a lot of "small" (but by most people's standards fairly large) $x$. It was proved to be false in 1914 by J.E. Littlewood, who did not find a counterexample explicitly, but showed that one must exist - it is believed to be around $10^{316}$, way outside the range of computations at the time. So this example isn't great, because the logarithmic integral is fairly technical, but the specifics of $\operatorname{Li}(x)$ aren't that important, so it's just about one function being bigger than another. More details on Wikipedia. - Heather360 gives the following amusing example: US presidents elected in 1840, 1860, 1880, 1900, 1920, 1940, and 1960 all died in office, but Ronald Reagan did not. But the following example is probably more along the lines of what you had in mind. The pattern is not very long, but it is very simple and could be explained to anyone of any background: http://threesixty360.wordpress.com/2008/10/26/one-two-three-four-six-again-and-then-again/ Also, since you started your question without reference to patterns, per se, but to the importance of mathematical proof, I would point to the Banach-Tarski paradox. I think most people, especially non-mathematicians, have trouble believing this result, so it is certainly an example of mathematical proof establishing a counter-intuitive result. - 1 +1, very interesting. – Emmad Kareem Feb 20 '12 at 23:34 @Emmad Your's too. I wouldn't have though of citing a function with a singularity, but why not? That is just as good as any other example! 
– William DeMeo Feb 21 '12 at 0:04 4 That (Banach-Tarski) is not an example you should use to make people believe in mathematics ;-) – phkahler Feb 21 '12 at 15:40 Well if we're going to go with examples outside of math, how about the famous claim that the winner of the superbowl (whether home or away) will determine whether the stock market goes up or down? Sounds ridiculous, but it held true for something like 15 years straight, then the majority of the next 10. – BlueRaja - Danny Pflughoeft Feb 21 '12 at 19:32 The "chinese remainder" prime-test : $\qquad \small \text{ if } 2^n-1 \equiv 1 \pmod n \qquad \text{ then } n \in \mathbb P$ fails first time at n=341 . That was one of the things that really made me thinking when I began hobbying with number-theory in a more serious way... - I like to point to the many tuples of numbers that are part of multiple sequences at OEIS.org. I just typed in 1, 1, 2, 3, 5 and got 751 results. - 2 Even with `1,1,2,3,5,8,13,21,34,55,89` you still get 26 results. While many contain the word "Fibonacci" in the description, there's also "Expansion of 1/(1 - x - x^2 + x^18 - x^20)." – celtschk Sep 15 '12 at 21:41 1 You can actually get up to 10946 and still have a result that's not directly related to the Fibonacci sequence. – Joe Z. Mar 21 at 20:37 Does Goodstein's Theorem fit the bill? (By the way, here is a nice applet.) The question was: So, are there any examples of non-trivial patterns that appear to be true for a large number of small cases, but then fail for some larger case? A good answer to this question should: $(1)$ be one which could be explained to the layman without having to subject them to a $24$ lecture course of background material, and $(2)$ have as a minimal counterexample a case which cannot (feasibly) be checked without the use of a computer. Requirement $(1)$ is obviously satisfied. Is requirement $(2)$ is satisfied? In some sense it is not, because a computer wouldn't help. But, in another sense, it is over satisfied, because, even with the most powerful imaginable computer, the statement cannot be checked. That is, it cannot be checked by any calculation, although it only involves addition, multiplication and exponentiation of positive integers. But with a very simple notion (that of ordinal), it becomes almost trivial. To say it in another way: It is very easy to prove that the apparent pattern will break eventually, but the argument doesn't give the slightest clue about when it will break. So, Goodstein's Theorem is, I think, a quite instructive piece of mathematics. - A particular instance, such as the claim "the Goodstein sequence $G(4)$ does not terminate" would satisfy (2) as well as (1). – Trevor Wilson Sep 12 '12 at 22:02 Fermat numbers would be a good example. The numbers $F_n = 2^{2^n}+1$ are prime for $n=1,2,3,4$, however $F_5 = 4,294,967,297 = 641 × 6,700,417$ is not prime. In fact, there are no known Fermat primes $F_n$ with $n > 4$. Admittedly this isn't impossible to check by hand, but the rapid increase in $F_n$ makes it factoring such numbers by hand highly impractical. I can't imagine any layman who would be comfortable trying to factor even a 10 digit number. In the case of $F_5$, trying to check for prime factors by brute force you would have to check 115 primes before you get to 641. - When telling this story, it's highly effective to mention that Fermat himself was fooled by this, having apparently not bothered to check $n=5$! 
Also, it drives the point home to mention that we're now in the opposite situation, where one is tempted to conjecture that $F_n$ is prime iff $n=1,2,3,$ or $4\ldots$ – Douglas B. Staple Apr 10 at 2:33 This might be a simple example. If we inscribe a circle of radius 1 in a square of side 2, the ratio of the area of the circle to the square is $\frac{\pi}{4}$. You can show that any time we put a square number of circles into this square, the ratio of the area of the circles to that of the square is (for the simple symmetric arrangement) again $\frac{\pi}{4}$. So for 1, 4, 9, 16 circles, this packing is the best we can do. I had mistakenly assumed, based on this "obvious" pattern, that the limit of optimal packings of circles into the square did not converge, but rather continued to drop down to this same ratio every time a square number was reached. This turns out not to be true, as I learned here. There are many other examples, but this served as a reminder for me. - Can a circle be cut up into a finite number of parts and rearranged to form a square? Laczkovich proved in 1990 that this can be done with about $10^{50}$ pieces. A good source for this kind of thing is "Old and new unsolved problems in plane geometry and number theory," by Klee and Wagon. The advantage is that none of the problems use more than arithmetic and geometry, so the examples are accessible to people who aren't mathematicians. - 8 I'm not sure the dissection is a great example; there's no 'easy' way of showing that it doesn't happen for smaller counts of pieces, and even for the large piece-counts they're not 'parts' in the ways that people would really expect (in particular, their boundaries aren't proper curves). – Steven Stadnicki Feb 21 '12 at 1:24 Let $$\pi^{(4)}_1(N) = \text{ Number of primes }\leq N\text{ that are of the form } 1 \bmod 4$$ and $$\pi^{(4)}_3(N) = \text{ Number of primes }\leq N\text{ that are of the form } 3 \bmod 4$$ $$\begin{array}{ccc} N & \pi^{(4)}_1(N) & \pi^{(4)}_3(N) \\ 100 & 11 & 13\\ 200 & 21 & 24\\ 300 & 29 & 32\\ 400 & 37 & 40\\ 500 & 44 & 50 \end{array}$$ Looking at the pattern, one can wonder if $\pi^{(4)}_1(N) \leq \pi^{(4)}_3(N)$ is true for all $N$. In fact, this remains true for $N$ up-to $26,860$. $26,861$ is a prime $\equiv 1 \bmod 4$ and we find that $\pi^{(4)}_1(26,861) = \pi^{(4)}_3(26,861) + 1 > \pi^{(4)}_3(26,861)$. You can read more about this and similar questions on primes here. - Here is a true story which might be entertaining, if not strictly following your conditions. We were working on an algorithm for solving problem X. As is quite usual with algorithms, there is some parameter $n$ measuring the complexity of the input. Our algorithm depended on a set of parameters for each $n$. We were able to find suitable parameters for each $n$. Then we tried to generalize the algorithm to problem Y, using the same parameters derived for problem X. We worked hard on proving that this approach works. My coauthor proved the cases $n=2,3,4,5$ by hand, each progressively more difficult. The computer (with my help) was able to find a proof for $n = 6$. When asked about $n = 7$, the computer thought for a while and then announced that it couldn't find a proof because for $n = 7$ our approach fails! Not only were our hearts broken (we stopped working on the problem for a few months), but we were quite at a loss to figure out what goes wrong at $n = 7$, and how to fix it. 
When algorithms fail, the minimal counterexample is usually small and there is hope of getting around the problem. Not so in this case. Fortunately, later on we were able to find another set of parameters for problem Y which did work for $n = 7$. This time we held our breath until the computer verified all cases up to $n = 50$, though we were not in peace with ourselves until we proved that our new parameters work for all $n$. - Euler's sum of powers conjecture, proposed in 1769, is a generalization of Fermat's Last Theorem about the following Diophantine equation $$\sum_{i=1}^n X_i^k=Y^k\textrm{, where }n\neq 1$$ It states that for the equation to have any solutions in positive integers, $n$ must be at least $k$ (FLT is the statement that $n\ge 2$ if $k\ge 2$). For small values of $X_i,Y$, the conjecture appears to be true. In 1966, L. J. Lander and T. R. Parkin found a counterexample for the $k=5$ case: $$25^5+84^5+110^5+133^5=144^5.$$ In 1986, Noam Elkies found an infinite family of solutions to $X^4+Y^4+Z^4=W^4$ - another counterexample. In 1988, Roger Frye used a computer and Elkies's method to find the smallest such counterexample to the $k=4$ case: $$95800^4+217519^4+414560^4=422481^4.$$ This is the only solution where $W,X,Y$ and $Z$ are less than $1,000,000$. - This recent question on math.SE provides an example, although the apparent pattern is fairly short. Consider two unit spheres in $n$ dimensions whose centers are $1$ unit apart. What is the fraction $\phi_n$ of the area of one sphere that lies inside the other? As it turns out, the answer is quite nice for small $n$, but quickly breaks down: $$\begin{align} \phi_1 &= \frac12, \\ \phi_2 &= \frac13, \\ \phi_3 &= \frac14, \\ \phi_4 &= \frac13-\frac{\sqrt3}{4\pi} \approx 0.195501\!\ldots, \\ \phi_5 &= \frac5{32}, \\ &\vdots \end{align}$$ The general formula is $$\phi_n = \frac{\int_{\pi/6}^{\pi/2}\cos^{n-2}\theta\,\mathrm d\theta}{\int_{-\pi/2}^{\pi/2}\cos^{n-2}\theta\,\mathrm d\theta} = \frac12 I_{3/4}\left(\frac{n-1}2,\frac12\right)$$ (thanks @joriki and Wikipedia), where $I_x(a,b)$ is the regularized incomplete beta function. - This is why any geometry in more than four dimensions is completely mind-fizzling. – Joe Z. Mar 12 at 19:57 Here is an example relating to a Diophantine equation. Consider positive integer solutions of $a^3 + b^3 + c^3 = d^3$. The first few primitive solutions all contain 2 odd and 2 even integers, i.e. (3,4,5,6), (1,6,8,9), (3,10,18,19), (7,14,17,20), (4,17,22,25) and (18,19,21,28). But then the pattern breaks down with (11,15,27,29). A list of the small solutions is at http://mathworld.wolfram.com/DiophantineEquation3rdPowers.html - This is a bit complicated for laymen, but it's great for aspiring number theorists. We have two conjectures: (1) The prime $k$-tuples conjecture: every admissible sequence occurs infinitely often. This is a generalization of the twin prime conjecture, which corresponds to the $k=2$ case. The $k=3$ case is that there are infinitely many $p \in \mathbb{N}$ such that $p$, $p+2$, and $p+6$ are all prime. (2) The Hardy-Littlewood convexity conjecture: $\pi(x+y)\leq \pi(x)+\pi(y)\ \forall\ x,y\geq 2$, where $\pi(x)$ is the prime counting function. This conjecture claims that the primes are densest for small $x$. No counterexamples are known for either (1) or (2). In isolation, both (1) and (2) seem reasonable. However, it turns out that (1) and (2) are mutually exclusive, which you might imagine if you stare at them both long enough with a glass of whisky. 
This example comes from Crandall and Pomerance, pp. 20-21: "... the current thinking is that the Hardy-Littlewood convexity [conjecture] is false ... but it also may be that any value of $x$ required to demolish the convexity conjecture is enormous." For laymen, what you can say is that there are conjectures for which no counterexamples are known, even after checking many billions of cases with computers, but which are nevertheless known to be false. Another such example is the $\pi(x) < \operatorname{Li}(x)$ false conjecture in Matt Pressland's answer. - Here is a short sequence: 1, 2, 3, 4, 5, 6 What is next term ? Next term is 1000, obviously. $a(n)= n + ((n-1)(n-2)(n-3)(n-4)(n-5)(n-6)(p-7))/6!$ Choose p = 1000 and you´ll get seventh term a(7)= p = 1000, obviously. Choose p = 7 and you´ll get 7, also obviously. We get $a(7) = p$ and so a(7) is always whatever you want; but $a(8) = 7p -41 ; a(9) = 56p -383$ are deteremined by p. Naturally you can easily extend the formula to whatever 1,2,3,4,5,6,7,8,9,1000111,... as an also obvious example. I found this formula in the book "Planetas" by the Spaniard astrophysicist Eduardo Battaner. I recommend reading his great book: "Física de las noches estrelladas", full of equations but the better divulgative astrophysics (and in general) book i have ever read, and i read a few. Great book to learn how to divulgate ¿"difficult"? problems to amateurs and newcomers in general who could not formally learn it at school/university. After reading it you´ll have a much better idea of what the Universe is without being messed with the abundant (bad) literature. I do not think, though, there is a translation of the 280 pages book into English or French or German. See : http://lit-et-raire.blogspot.com.es/2013/02/una-sucesion-muy-natural-y-tu-medida.html - No, $a(7)=7,$ but $a(8)=14$ and $a(9)=37$ – Ross Millikan Feb 16 at 4:48 – user55514 Feb 17 at 9:46 Here is one example that is incredibly simple, requires no Greek or variables to explain to a layman, and really is wonderfully ridiculous: Take the series $1-2+3-4...$ Here are the first few partial sums: $1 = 1$ $1-2= -1$ $1-2+3= 2$ $1-2+3-4= -2$ $1-2+3-4+5= 3$ $1-2+3-4+5-6= -3$ Obviously this series somehow ends at $\infty$ or $-\infty$, or really probably diverges and is neither...right? But here's the kicker - the whole thing ends up at $\dfrac{1}{4}$. That is, $$1-2+3-4...=\dfrac{1}{4}$$ So we were summing integers, and we were somehow trending towards $\pm\infty$, and then we ended up with a number that is neither an integer nor infinite. How? The proof is simple: If $s$ is the sum, solve for $4s$: $4s = (1-2+3-4\cdots) + (1-2+3-4\cdots) + (1-2+3-4\cdots) + (1-2+3-4\cdots)$ $4s = (1-2+3-4\cdots) + 1+(-2+3-4+5\cdots) + 1+(-2+3-4+5\cdots) + (1-2)+(3-4+5-6\cdots)$ $4s = (1-2+3-4\cdots) + 1+(-2+3-4+5\cdots) + 1+(-2+3-4+5\cdots) -1+(3-4+5-6\cdots)$ $4s = 1 + (1-2+3-4\cdots) + (-2+3-4+5\cdots) + (-2+3-4+5\cdots) + (3-4+5-6\cdots)$ $4s = 1 + [ (1-2-2+3) + (-2+3+3-4) + (3-4-4+5) + (-4+5+5-6) \cdots]$ $4s=1 + [0 + 0 + 0 + 0 \cdots]$ $4s=1$ $s=\dfrac{1}{4}$ And voilà, a mathematical paradox: An interesting pattern that always continues and at the same time ends up somewhere you weren't expecting in the least. - I wouldn't say that this is a paradox - according to standard analysis, the argument you give assumes that the series tends to a limit, and then shows that that limit must be $1/4$. The series in fact diverges. 
Of course, if you extend the notion of 'convergent series', you can give some meaning to the statement that this series equals $1/4$, but by then you've got so far from the idea of convergence that it seems disingenuous to claim that this represents a paradox. – Donkey_2009 Apr 27 at 13:02 1 @Donkey_2009 Euler called it a paradox, hence my use of the term. I think you're just getting caught in the semantics - it's a very graspable example answering OP's "interesting patterns must always continue" question. That you need a more solid grasp on math to fully understand it is the whole point. – GraphicsMuncher Apr 27 at 19:39
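As an illustrative aside (not part of the original thread), one standard way to "extend the notion of convergent series" here is Abel summation, and a quick numerical check shows that it assigns this series the value 1/4:

```python
def abel_sum_partial(x, terms=200_000):
    # Partial sum of sum_{n>=1} (-1)**(n+1) * n * x**n, which converges for |x| < 1
    # and equals x / (1 + x)**2 in closed form.
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, terms + 1))

for x in (0.9, 0.99, 0.999):
    print(x, abel_sum_partial(x), x / (1 + x) ** 2)
# Both columns approach 0.25 as x -> 1, the value claimed for 1 - 2 + 3 - 4 + ...
```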
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 173, "mathjax_display_tex": 21, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451014399528503, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/46228/averaging-decibels
# Averaging decibels Wikipedia: The decibel (dB) is a logarithmic unit that indicates the ratio of a physical quantity (usually power or intensity) relative to a specified or implied reference level. If I measure some physical quantity in decibels, then what is the preferred way to calculate the mean of the measured values? Is it enough to simply average them, or should I convert them back to linear scale, calculate the average, and convert it back to decibels (example)? When should I use which approach, and why? - ## 2 Answers The "physically natural" quantity to average is the actual power, or energy, but it depends exponentially on the number of decibels. So if you were averaging the power or energy, the result would be pretty much equal to the power or energy of the largest (loudest) reading in decibels. So even though it's physically less natural, you probably want to compute the average number of decibels itself. But as you said, it's abuot "preferences". Your question isn't a question about observables, it's about subjective choices, so there can't of course be any "only correct and objective" answer. For various applications, various averages may be more or less useful or representative. - Thank you! Do you think that the preference can be guided by the distribution of the measured quantity? I mean, I would average on the scale (linear or decibel) on which the distribution of measured values is more close to the Gaussian distribution. – kol Dec 7 '12 at 17:20 Dear kol, well, everything may matter. But the Gaussian distribution isn't a likely expectation here. It's only relevant if the number of decibels is almost exactly the same, within "one decibel or less". In that case, if someone tries to make the loudness constant, it doesn't matter whether you exponentiate or not. However, decibels are useful exactly because the power of sound in various situations differs by many orders of magnitudes, so the readings in decibels tend to be vastly variable, and the power differs hugely in various situations. – Luboš Motl Dec 7 '12 at 17:48 Quite generally, you may have some noise whose number of decibels is variable. Sometimes an airplane takes off at the airport, and so on. So you will have something between 50 and 130 decibels at a point. This corresponds to a huge variability of power (energy per time) that differs by 8 orders of magnitude (80 decibels). It's so wide and the behavior of the sources as well as impacts are so nonlinear that there's absolutely no reason to expect any simple distribution in this interval that is "amazingly wide", using physical criteria. The Gaussian distribution is only OK for narrow, linearized – Luboš Motl Dec 7 '12 at 17:51 situations so exactly when the decibels become useful as a description of the sound's volume, the reasons to expect the normal distribution evaporate. The same comment applies to any other thing that is described by a logarithmic scale, for example Richter scale for earthquakes or pH for acidity. In those cases, the actual physical quantity also exponentially depends on the reading and this choice is used exactly because the physical quantities (energy in earthquake, concentration of OH- ions) has no trouble to change by many many orders of magnitude. – Luboš Motl Dec 7 '12 at 17:53 Thank you very much! – kol Dec 7 '12 at 18:07 show 1 more comment There are reasons more than "preference" for the averaging. You defined it that way usually because you can get more information from that, particular for those additive quantities. 
Suppose you perform a set of measurements at a particular point in space. There are two cases: (a) average the decibel readings directly, or (b) average the intensity itself and then convert the result to decibels. If you have the quantity from case (b), you know the average energy flux passing through that point, and hence also the total energy flowing through it. This information cannot be obtained from method (a). The situation is similar for earthquakes: if you average the energy, you know the total energy released at that particular point, which is important; you cannot obtain this by simply averaging the magnitude scale. Sure, as pointed out by Lubos, if the variation is small the two definitions are basically the same, since the $\log$ (or any smooth) function is locally linear, and you then recover an additive quantity. -
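A small numerical sketch (not from the original answers) contrasting the two procedures: averaging the decibel readings directly versus averaging the underlying power and converting back. The sample readings are made up for illustration.

```python
import numpy as np

# Hypothetical sound-level readings in dB (ambient noise punctuated by one loud event).
readings_db = np.array([52.0, 55.0, 49.0, 60.0, 118.0])

# Method (a): average the decibel values themselves.
mean_of_db = readings_db.mean()

# Method (b): convert to linear power (relative to the reference), average, convert back.
powers = 10 ** (readings_db / 10)            # dB -> power ratio
db_of_mean_power = 10 * np.log10(powers.mean())

print(f"mean of dB values : {mean_of_db:.1f} dB")
print(f"dB of mean power  : {db_of_mean_power:.1f} dB")
# Method (b) is dominated by the single loudest reading, as the answers above note,
# while method (a) stays close to the typical reading.
```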
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355130195617676, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/27186/is-there-a-majorana-like-representation-for-singlet-states/27188
# Is there a Majorana-like representation for singlet states? I mean the Majorana representation of symmetric states, i.e., states of $n$ qubits invariant under a permutation of the qudits. See, for example, D. Markham, "Entanglement and symmetry in permutation symmetric states", arXiv:1001.0343v2. By Majorana representation I mean decomposition of a state $$|\psi\rangle = \text{normalization} \times \sum_{perm} |\eta_1\rangle |\eta_2\rangle \cdots |\eta_n\rangle,$$ where $|\eta_k\rangle$ are uniquely determined (up to a global phase in each and the permutation) qubit states. - 5 I think the question should be expanded a little bit. – Marcin Kotowski Sep 15 '11 at 3:20 @UGPhysics: I don't think his question is specific to Markham's paper, but rather he gives it as an example. The Majorana representation was already in existence prior to that paper. – Joe Fitzsimons Sep 15 '11 at 6:32 Hi @Qmechanic, I guess here I could need some help with the tags ... – Dilaton Jan 1 at 21:07 ## 3 Answers Well, there is certainly not a Majorana representation, since any decomposition will have two terms which differ by a phase of -1, you can't find Majorana points. The singlet state is anti-symmetric, so there is no way as writing it in the form $\frac{e^{i\alpha}}{\sqrt{2}} \sum_{j=0}^1 | \phi_{1\oplus j} \rangle \otimes | \phi_{0\oplus j} \rangle$ since it is always in the state $\frac{e^{i\alpha}}{\sqrt{2}} \sum_{j=1}^2 (-1)^j| \phi_{1\oplus j} \rangle \otimes | \phi_{0\oplus j} \rangle$. Here I have taken $\{\phi_{0}, \phi_{1}\}$ as a basis for the single qubit Hilbert space. However, if you want something kinda-sorta like the Majorana representation, you can do the following. The Majorana representation is effectively treating the subsystems like bosons, and hence we are stuck working in the symmetric subspace. However, you can do the exact same thing treating the subsystems as fermions, which will the restrict you to the antisymmetric state for a Hilbert space of that dimension. Another route would simply be to consider states which are LU equivalent to Majorana states, but I have no idea whether this is useful to you (you haven't explained exactly what you want or why you want it). If you just care about entanglement (which is a very common usage) then LU equivalence should be fine. - In the general - the answer is no. Majorana representation's key point is to express a composite state of $n$ qubits as $n$ points is such way, that action of a collective rotation (i.e. $|\psi \rangle \mapsto U^{\otimes n} |\psi \rangle$ for $U\in \text{SU}(2)$) rotates each points in the same way (i.e. $| \eta_k\rangle \mapsto U | \eta_k\rangle$). In other words, $| \eta_k\rangle$ are covariant. Singlet states are, by definition, invariant under the application of the same unitary operation (i.e. $U^{\otimes n}|\psi \rangle = |\psi \rangle$). So its is not possible to create a representation by covariant states. However, if you are asking about a general way to tackle singlet states, there is for example a paper introducing a (non-orthohonal) basis (but with nice properties) for qubit singlet states (D. Lyons, S. Walck, PRA 2008, arXiv:0808.2989). Their reasoning can be generalized to qudit singlet states (I'm writing a paper on it, feel free to ask more). - It seems that you are asking if the fundamental theorem of algebra is true. A symmetric $n$ qubit state can be seen as a homogeneous polynomial of degree $n$ in two variables. 
The factorization corresponds to the roots of the dehomogenized version of this polynomial. -
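For readers who want to see the polynomial picture in action, here is a small numerical sketch (not from the thread, and using one common sign convention for the Majorana polynomial; conventions in the literature differ): recover the Majorana points of the 3-qubit GHZ state from the roots of its polynomial and rebuild the state by symmetrizing.

```python
import numpy as np
from math import comb
from itertools import permutations

def majorana_roots(c):
    """Roots of sum_k sqrt(C(n,k)) c_k z^k for a symmetric n-qubit state with
    Dicke coefficients c_0..c_n (one common convention; assumed, not from the thread)."""
    n = len(c) - 1
    coeffs = [np.sqrt(comb(n, k)) * c[k] for k in range(n + 1)]
    return np.roots(coeffs[::-1])            # np.roots wants the highest degree first

def symmetrized_product(single_qubit_states):
    """Unnormalized sum over permutations of the tensor product of the given qubit states."""
    dim = 2 ** len(single_qubit_states)
    out = np.zeros(dim, dtype=complex)
    for perm in permutations(single_qubit_states):
        vec = np.array([1.0 + 0j])
        for q in perm:
            vec = np.kron(vec, q)
        out += vec
    return out

# Example: the 3-qubit GHZ state (|000> + |111>)/sqrt(2) in the Dicke basis.
c = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]
roots = majorana_roots(c)
# Each root z corresponds to the single-qubit state proportional to z|0> - |1>.
etas = [np.array([z, -1.0]) / np.linalg.norm(np.array([z, -1.0])) for z in roots]

psi = symmetrized_product(etas)
psi /= np.linalg.norm(psi)

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print("overlap with GHZ:", abs(np.vdot(ghz, psi)))   # prints 1.0 up to rounding
```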
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364354610443115, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=c6726230d0bb7c500dde31e04d1c04ea&p=4277220
Physics Forums

## Best way to solve Schrodinger's wave equation numerically.

I have been trying to research the best way to solve the Schrodinger wave equation numerically so that I can plot and animate it in Maple. I'd also like to animate it as it is affected by a potential. I have been trying for weeks to do this and I don't feel any closer than when I started. I have looked at the finite difference method but I get so far and don't know what to do next. Any help would be greatly appreciated. The sort of thing I'm looking for is like in this presentation on youtube, especially at 11s leading onto something like the animation at 13s. Thank you very much

Can you give more detail? What is the Hamiltonian? In how many dimensions? And there is no link to the YouTube video.

Hi, sorry for not posting the link, I was very tired when making this post, definitely an oversight on my part. I will be able to post the link in just over an hour. With regards to dimensions, it would only be in 1 dimension along x. Regarding the Hamiltonian, I am trying to solve the equation as i x hbar x diff(psi, t) = -(hbar^2)/2m x diff(psi, x\$2) + V(x) x psi where psi = psi(x, t). Thank you for replying.

## Best way to solve Schrodinger's wave equation numerically.

Since your potential is time independent, the fastest way to solve your problem is to first find the eigenfunctions of the Hamiltonian [tex] H \phi_i = E_i \phi_i [/tex] To do this, discretize space and write the Hamiltonian as a matrix. As you said, you can use a finite difference approximation for the momentum operator. Once you have the $\phi_i$, find the initial coefficients of your wave function in this basis, [tex] \psi(x,t=0) = \sum_i c_i \phi_i(x) [/tex] by calculating [tex] c_i = \int \phi_i^*(x) \psi(x,t=0) dx [/tex] Then, the wave function at any time is simply given by [tex] \psi(x,t) = \sum_i c_i \phi_i(x) \exp(-i E_i t / \hbar) [/tex] By advancing $t$ and refreshing the plot, you will get your animation. I have no idea how to do this in Maple

Slide rule, Why not ask this question over on the Diff Eq forum.
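For readers who want to see the recipe above in code, here is a minimal sketch in Python/NumPy (the thread itself asks about Maple, so this is only an illustration of the same eigenfunction-expansion method, with an assumed harmonic potential and Gaussian initial packet):

```python
import numpy as np

hbar, m = 1.0, 1.0
L, N = 20.0, 400
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                       # example potential (harmonic); swap in any V(x)
# Second-derivative operator via the standard three-point finite difference.
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -(hbar**2) / (2 * m) * lap + np.diag(V)

E, phi = np.linalg.eigh(H)           # eigenvalues E_i and eigenvectors phi[:, i]

# Initial state: a Gaussian wave packet with some momentum.
psi0 = np.exp(-(x + 3.0) ** 2) * np.exp(1j * 2.0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

c = phi.conj().T @ psi0              # expansion coefficients in the eigenvector basis

def psi_at(t):
    """Wave function on the grid at time t, rebuilt from the eigen-expansion."""
    return phi @ (c * np.exp(-1j * E * t / hbar))

for t in (0.0, 0.5, 1.0):
    p = psi_at(t)
    print(f"t = {t}: norm = {np.sum(np.abs(p)**2) * dx:.4f}")   # stays ~1, as it should
```

Plotting `np.abs(psi_at(t))**2` for a sequence of times gives exactly the kind of animation described in the thread.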
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221542477607727, "perplexity_flag": "middle"}
http://catalog.flatworldknowledge.com/bookhub/reader/1?e=baranoff-ch03_s02
# Risk Management for Enterprises and Individuals, v. 1.0

by Etti Baranoff, Patrick L. Brockett, and Yehuda Kahane

## 3.2 Uncertainty, Expected Value, and Fair Games

### Learning Objectives

• In this section we discuss the notion of uncertainty. Mathematical preliminaries discussed in this section form the basis for analysis of individual decision making in uncertain situations.
• The student should pick up the tools of this section, as we will apply them later.

As we learned in Chapter 1 "The Nature of Risk: Losses and Opportunities" and Chapter 2 "Risk Measurement and Metrics", risk and uncertainty depend upon one another. The origins of the distinction go back to Frank Knight (see Jochen Runde, "Clarifying Frank Knight's Discussion of the Meaning of Risk and Uncertainty," Cambridge Journal of Economics 22, no. 5 (1998): 539–46), who distinguished between risk and uncertainty, arguing that measurable uncertainty is risk. In this section, since we focus only on measurable uncertainty, we will not distinguish between risk and uncertainty and will use the two terms interchangeably.

As we described in Chapter 2 "Risk Measurement and Metrics", the study of uncertainty originated in games of chance. So when we play games of dice, we are dealing with outcomes that are inherently uncertain. The branch of science concerned with uncertain outcomes is probability and statistics. Notice that the analysis of probability and statistics applies only if outcomes are uncertain. When a student registers for a class but does not attend any lectures nor do any assigned work or tests, only one outcome is possible: a failing grade. On the other hand, if the student attends all classes and scores 100 percent on all tests and assignments, then too only one outcome is possible, an "A" grade. In these extreme situations, no uncertainty arises with the outcomes. But between these two extremes lies the world of uncertainty. Students often do research on the instructor and try to get a "feel" for the chance that they will make a particular grade if they register for an instructor's course. Even though we covered some of this discussion of probability and uncertainty in Chapter 2 "Risk Measurement and Metrics", we repeat it here for reinforcement.
Figuring out the chance, in mathematical terms, is the same as calculating the probability of an event. To compute a probability empirically, we repeat an experiment with uncertain outcomes (called a random experiment) and count the number of times the event of interest happens, say n, in the N trials of the experiment. The empirical probability of the event then equals n/N. So, if one keeps a log of the number of times a computer crashes in a day and records it for 365 days, the probability of the computer crashing on a day will be the sum of the daily crash counts (including zeroes for days it does not crash at all) divided by 365.

For some problems, the probability can be calculated using mathematical deduction. In these cases, we can figure out the probability of getting a head on a coin toss, two aces when two cards are randomly chosen from a deck of 52 cards, and so on (see the example of the dice in Chapter 2 "Risk Measurement and Metrics"). We don't have to conduct a random experiment to actually compute the mathematical probability, as is the case with empirical probability. Finally, as strongly suggested before, subjective probability is based on a person's beliefs and experiences, as opposed to empirical or mathematical probability. It may also depend upon a person's state of mind. Since beliefs may not always be rational, studying behavior using subjective probabilities belongs to the realm of behavioral economics rather than traditional rationality-based economics.

So consider a lottery (a game of chance) wherein several outcomes are possible with defined probabilities. Typically, outcomes in a lottery consist of monetary prizes. Returning to our dice example of Chapter 2 "Risk Measurement and Metrics", let's say that when a six-faced die is rolled, the payoffs associated with the outcomes are $1 if a 1 turns up, $2 for a 2, …, and $6 for a 6. Now if this game is played once, one and only one amount can be won—$1, $2, and so on. However, if the same game is played many times, what is the amount that one can expect to win? Mathematically, the answer to any such question is very straightforward and is given by the expected value of the game. In a game of chance, if $W_1, W_2,\ldots, W_N$ are the N possible outcomes with probabilities $\pi_1, \pi_2,\ldots, \pi_N$, then the expected value of the game (G) is $E(G)=\sum_{i=1}^{N}\pi_i W_i=\pi_1 W_1+\pi_2 W_2+\cdots+\pi_N W_N.$ The computation can be extended to expected values of any uncertain situation, say losses, provided we know the outcome numbers and their associated probabilities. The probabilities sum to 1, that is, $\sum_{i=1}^{N}\pi_i=\pi_1+\cdots+\pi_N=1.$

While the computation of expected value is important, equally important is the notion behind expected values. Note that we said that when it comes to the outcome of a single game, only one amount can be won, either $1, $2, …, $6. But if the game is played over and over again, then one can expect to win $E(G)=\frac{1}{6}\times 1+\frac{1}{6}\times 2+\cdots+\frac{1}{6}\times 6=3.50$ dollars per game. Often—like in this case—the expected value is not one of the possible outcomes of the distribution. In other words, the probability of getting $3.50 in the above lottery is zero. Therefore, the concept of expected value is a long-run concept, and the hidden assumption is that the lottery is played many times. Secondly, the expected value is a sum of the products of two numbers, the outcomes and their associated probabilities.
If the probability of a large outcome is very high then the expected value will also be high, and vice versa. Expected value of the game is employed when one designs a fair game. A fair game, actuarially speaking, is one in which the cost of playing the game equals the expected winnings of the game, so that the net value of the game equals zero. We would expect that people are willing to play all fair value games. But in practice, this is not the case. I will not pay $500 for a lucky outcome based on a coin toss, even if the expected gains equal $500. No game illustrates this point better than the St. Petersburg paradox.

The paradox lies in a proposed game wherein a coin is tossed until "head" comes up. That is when the game ends. The payoff from the game is the following: if head appears on the first toss, then $2 is paid to the player, if it appears on the second toss then $4 is paid, if it appears on the third toss, then $8, and so on, so that if head appears on the nth toss then the payout is $2^n$ dollars. The question is how much would an individual pay to play this game? Let us try and apply the fair value principle to this game, so that the cost an individual is willing to bear should equal the fair value of the game. The expected value of the game E(G) is calculated below.

The game can go on indefinitely, since the head may never come up in the first million or billion trials. However, let us look at the expected payoff from the game. If head appears on the first try, the probability of that happening is $\frac{1}{2}$, and the payout is $2. If it happens on the second try, it means the first toss yielded a tail (T) and the second a head (H). The probability of the TH combination is $\frac{1}{2}\times\frac{1}{2}=\frac{1}{4}$, and the payoff is $4. Then if H turns up on the third attempt, it implies the sequence of outcomes is TTH, and the probability of that occurring is $\frac{1}{2}\times\frac{1}{2}\times\frac{1}{2}=\frac{1}{8}$, with a payoff of $8. We can continue with this inductive analysis ad infinitum. Since the expected value is the sum of all products of outcomes and their corresponding probabilities, $E(G)=\frac{1}{2}\times 2+\frac{1}{4}\times 4+\frac{1}{8}\times 8+\cdots=\infty.$

It is evident that while the expected value of the game is infinite, not even the Bill Gateses and Warren Buffetts of the world will give even a thousand dollars to play this game, let alone billions. Daniel Bernoulli was the first one to provide a solution to this paradox in the eighteenth century. His solution was that individuals do not look at the expected wealth when they bid a price for a lottery; the expected utility of the lottery is the key. Thus, while the expected wealth from the lottery may be infinite, the expected utility it provides may be finite. Bernoulli termed this the "moral value" of the game. Mathematically, Bernoulli's idea can be expressed with a utility function, which provides a representation of the satisfaction level the lottery provides. Bernoulli used $U(W)=\ln(W)$ to represent the utility that this lottery provides to an individual, where W is the payoff associated with each event H, TH, TTH, and so on. The expected utility from the game is then given by $E(U)=\sum_{i=1}^{\infty}\pi_i U(W_i)=\frac{1}{2}\times\ln(2)+\frac{1}{4}\times\ln(4)+\cdots=\sum_{i=1}^{\infty}\frac{1}{2^i}\ln(2^i),$ which can be shown to equal 1.39 after some algebraic manipulation. Since the expected utility that this lottery provides is finite (even if the expected wealth is infinite), individuals will be willing to pay only a finite cost for playing this lottery.
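A short numerical check of the two computations above (illustrative, not part of the textbook): the expected value of the die game and Bernoulli's log-utility value of the St. Petersburg game.

```python
from math import log

# Expected value of the die game: payoffs $1..$6, each with probability 1/6.
die_ev = sum((1 / 6) * w for w in range(1, 7))
print(f"E(G) for the die game   = {die_ev:.2f}")            # 3.50

# St. Petersburg game: payoff 2**i with probability 2**(-i).
# The expected payoff diverges, but the expected log-utility converges.
expected_utility = sum((0.5 ** i) * i * log(2) for i in range(1, 200))
print(f"E(U) with U(W) = ln(W)  = {expected_utility:.4f}")   # ~1.3863 = 2 ln 2, i.e. about 1.39
```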
The next logical question to ask is, What if the utility was not given as the natural log of wealth by Bernoulli but as something else? What is it about the natural log function that leads to a finite expected utility? This brings us to the issue of expected utility and its central place in decision making under uncertainty in economics.

### Key Takeaways

• Students should be able to explain probability as a measure of uncertainty in their own words.
• Moreover, the student should also be able to explain that any expected value is the sum of products of probabilities and outcomes and be able to compute expected values.

### Discussion Questions

1. Define probability. In how many ways can one come up with a probability estimate of an event? Describe.
2. Explain the need for utility functions using the St. Petersburg paradox as an example.
3. Suppose a six-faced fair die with numbers 1–6 is rolled. What is the number you expect to obtain?
4. What is an actuarially fair game?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335767030715942, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/36992/list
## Return to Answer

Just a quick thought about showing that $3^n\alpha_n$ is integral. It's not too hard to prove via the Catalan number generating function that `\[\sum_{k=0}^{p-1} C_kx^k\equiv \frac{1-(1-4x)^{(p+1)/2}}{2x}-x^{p-1}\quad\bmod (p,x^p).\]` Now apply $D=x\frac{d}{dx}$ a bunch of times and set $x=1$ to get the sums you want. This will be in $\mathbb{Z}[1/6]$, but then by using $(-3)^{(p-1)/2}\equiv \left(\frac{p}{3}\right)\bmod p$ you can control the 3-adic valuation (you get an extra 3 for each differentiation) and see that the only dependence on $p$ is the Legendre symbol $\left(\frac{p}{3}\right)$, which furthermore depends only on $p$ modulo 3. There's also a 2 in the denominator but I guess by looking a bit more carefully you can show this cancels out for $p$ not equal to 3. This is a bit sketchy but hopefully the idea is there. In particular, letting Sage do the work, I get `\[\sum_{k=0}^{p-1}C_k\equiv \frac{{\left(\frac{p}{3}\right)}-1}{2}\quad\bmod p\]` `\[\sum_{k=0}^{p-1}kC_k\equiv \frac{{-\left(\frac{p}{3}\right)}+1}{2}\quad\bmod p\]` `\[\sum_{k=0}^{p-1}k^2C_k\equiv \frac{-{\left(\frac{p}{3}\right)}-3}{6}\quad\bmod p\]` which seem to fit your sequence as well as the results in the Zhi-Wei Sun paper. I can't see how to get at the $\beta_j$ though.

Edited to add: it would appear that one can maybe do the same trick using `\[\sum_{k=0}^{p-1} C_kx^k\equiv \frac{1-(1-4x)^{(p^2+1)/2}}{2x}-px^{p-1}\quad\bmod (p^2,x^p)\]` to get the $\beta_j$ as well, though I'm not so convinced.
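As an illustrative cross-check (in Python rather than the Sage used in the answer), one can verify the first displayed congruence coefficient-by-coefficient for a few small primes:

```python
def poly_mul(a, b, p, trunc):
    """Multiply coefficient lists a and b mod p, truncating at degree < trunc."""
    out = [0] * min(len(a) + len(b) - 1, trunc)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < trunc:
                out[i + j] = (out[i + j] + ai * bj) % p
    return out

def first_congruence_holds(p):
    # Catalan numbers C_0..C_{p-1} mod p, via C_{k+1} = 2(2k+1)/(k+2) * C_k.
    C, c = [], 1
    for k in range(p):
        C.append(c % p)
        c = c * 2 * (2 * k + 1) // (k + 2)
    # (1 - 4x)^((p+1)/2) as a coefficient list mod p.
    pw = [1]
    for _ in range((p + 1) // 2):
        pw = poly_mul(pw, [1, (-4) % p], p, p)
    # (1 - (1-4x)^((p+1)/2)) / (2x): constant terms cancel, so shift down by one degree
    # and multiply by the inverse of 2 mod p; then subtract the extra x^(p-1) term.
    inv2 = pow(2, -1, p)
    rhs = [(-coef) * inv2 % p for coef in pw[1:]]
    rhs += [0] * (p - len(rhs))
    rhs[p - 1] = (rhs[p - 1] - 1) % p
    return C == rhs

print([p for p in (5, 7, 11, 13, 17, 19, 23, 29) if first_congruence_holds(p)])
# All the listed primes are printed, supporting the generating-function congruence.
```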
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9602051377296448, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/208923-lipschitz-continuous.html
# Thread:

1. ## Lipschitz continuous

This question is about Lipschitz continuity. I think the way to check if the solutions can be found as fixed points is just differentiating f(t), but I'm not sure about this. Can anyone give me some hints please? I will really appreciate it if you can give me some small hints.

2. ## Re: Lipschitz continuous

You need to check one of the assumptions of the Picard–Lindelöf theorem. Namely, denote the Picard operator above by $P(f)$ and the right-hand side of the differential equation by $F(t,x(t))$. If for some constant $q$ we have $|P(f)(t)-P(g)(t)|\le q\|f-g\|_\infty$ for all $t\in[0,T]$, then $\|P(f)-P(g)\|_\infty\le q\|f-g\|_\infty$. We want to find $T$ such that $0\le q<1$. For any $t\in[0,T]$, $\begin{align*}|P(f)(t)-P(g)(t)|&\le\left|\int_0^t F(s,f(s))\,ds-\int_0^t F(s,g(s))\,ds\right|\\&=\left|\int_0^t (F(s,f(s))-F(s,g(s)))\,ds\right|\\&\le\int_0^t |F(s,f(s))-F(s,g(s))|\,ds\\&\le\int_0^t L|f(s)-g(s)|\,ds\\&\le\int_0^t L\|f(s)-g(s)\|_\infty\,ds\\&\le TL\|f(s)-g(s)\|_\infty\end{align*}$ where $L$ is the Lipschitz constant of $F$ with respect to the second argument.

3. ## Re: Lipschitz continuous

How can we find the Lipschitz constant? And I'm not sure about the first part, the checking part, either; can you explain it more clearly?

4. ## Re: Lipschitz continuous

"check that the solutions can be found as fixed points of the map"

5. ## Re: Lipschitz continuous

Originally Posted by lahuxixi How can we find the Lipschitz constant? And I'm not sure about the first part, the checking part, either; can you explain it more clearly? You said that you wanted only small hints. You don't have to follow the link to the theorem; I wrote the relevant part of the proof. The restriction on T that guarantees that $P$ is a contraction constitutes one of the assumptions of the theorem. So, we have $F(t,x)=2\cos(tx^2)$. To prove that for every $t\in[0,T]$, $F$ is Lipschitz continuous w.r.t. $x$, we need to find a constant $L$ such that for every $t$, $x_1$ and $x_2$ it is the case that $|F(t,x_1)-F(t,x_2)|\le L|x_1-x_2|$. Using trigonometry, $\begin{align*}|F(t,x_1)-F(t,x_2)|&=4|\sin(t(x_1^2+x_2^2)/2)\sin(t(x_1^2-x_2^2)/2)|\\&\le4|\sin(t(x_1^2-x_2^2)/2)|\\&\le2|t(x_1^2-x_2^2)|\\&=2t|x_1+x_2|\cdot|x_1-x_2|\\&\le2T|x_1+x_2|\cdot|x_1-x_2|\end{align*}$ So, we need to find $L$ such that $2T|x_1+x_2|\le L$ for all $t\in[0,T]$. In particular, we need to bound $|x_1+x_2|\le|x_1|+|x_2|$ from above. It is clear that $F(t,x)$ is not Lipschitz on the whole $[0,T]\times\mathbb{R}$. Indeed, given any $t$, if we take larger and larger $x$, the cosine starts to oscillate faster and faster. For this reason, I am starting to doubt that $P$ is a contraction on all continuous functions on $[0, T]$. However, it is possible to find a positive constant $U$ such that $P$ is a contraction in $\{f\in C[0,T]:\|f\|\le U\}$. This is sufficient for the Picard theorem if we show that $P$ maps $\{f\in C[0,T]:\|f\|\le U\}$ into itself. In this respect, note that $|P(f)|\le 1+\int_0^t 2|\cos(sf^2(s))|\,ds\le1+2T$.

6. ## Re: Lipschitz continuous

Thank you for your help. It's because I didn't think I would be able to understand it; I guess I will have to revise now.
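A brief numerical sketch (not from the thread) of the fixed-point idea under discussion: iterate the Picard map for $F(t,x)=2\cos(tx^2)$ on a short interval and watch successive iterates converge. The initial value is taken to be 1, an assumption consistent with the bound $|P(f)|\le 1+2T$ quoted above.

```python
import numpy as np

# Picard map P(f)(t) = x0 + integral_0^t 2*cos(s * f(s)^2) ds, discretized on a grid.
# Assumptions: x0 = 1 (the original problem statement is not quoted in the thread)
# and a short interval T = 0.2, small enough that the map is a contraction.
x0, T, N = 1.0, 0.2, 2001
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def picard(f):
    integrand = 2.0 * np.cos(t * f**2)
    # Cumulative trapezoid rule for the running integral from 0 to t.
    integral = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2) * dt))
    return x0 + integral

f = np.full(N, x0)                  # start the iteration from the constant function x0
for k in range(8):
    f_new = picard(f)
    print(f"iteration {k+1}: sup|f_new - f| = {np.max(np.abs(f_new - f)):.2e}")
    f = f_new
# On this short interval the sup-norm differences shrink geometrically,
# illustrating that the Picard map is a contraction there.
```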
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 39, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9448938369750977, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=204051
Physics Forums ## Magnetisation due to conduction electrons 1. The problem statement, all variables and given/known data Due to its spin, the electron possesses a magnetic moment μB. Treating the conduction electrons in a metal as a free electron gas, obtain an expression for the magnetization due to the magnetic moments of the conduction electrons, when placed in a magnetic field. Evaluate this expression at zero temperature. 3. The attempt at a solution Ok. I have some hints for this Q, but they're confusing me. My vague guess would have been to try and calculate the number of electrons in the higher energy state (which, I think, would be the ones aligned with the field) and multiply by $$\frac{\mu_{\beta}}{V}$$ to obtain the overall magnetisation (the magnetic moment per unit volume). The hint I have here is that $$N_{\pm} = \frac{1}{2}\int_{0}^{\epsilon_{F}+\mu_{B}H} \rho(\epsilon) d\epsilon = \frac{4\pi V}{3h^3} (2m)^{3/2} (\epsilon_{F}\pm\mu_{B}H)^{3/2}$$ All this seems to be doing (to a beginner) is summing the possible states, not obtaining the actual number of either spin parallel/anti-parallel electrons (ie. occupants of states). And presumably the no. of states is vast. If we took the N+ version of that, I take it we'd be getting the no. of states up to the energy level of the electrons with the higher spin energy (and thus including the states of electrons with lower spin energy); if we took the N- version, just the lower energy states. $$N_{+} - N_{-}$$ would then, I presume, give us the number of states of the higher energy spins (most of which would be unoccupied). I am not sure why $$\rho(\epsilon)$$ has been written like this. By thinking about points in a positive octant (derivation not given here) one arrives at a no. of points $$G(\epsilon) = \frac{4\pi V}{3h^{3}}(2m\epsilon)^{3/2}$$ which can be expressed (I'll switch to his rho here) $$\rho(\epsilon) d\epsilon = \frac{dG(\epsilon)}{d\epsilon} = \frac{4\pi V}{h^{3}}(2m\epsilon)^{1/2} d\epsilon$$ $$\rho(\epsilon) d\epsilon$$, I take it, gives us the density of states. Surely we need some integral that multiplies that by a distribution function ($$e^{-\beta\epsilon}$$?) in order to state the no. of states up to some energy $$\epsilon$$ Well, I'm a bit confused (isn't it obvious) and not really sure what I'm doing. If someone could shed light on this question and show me how to put this together, I think it would open up some of the other problems on the sheet. (I haven't seen any examples of this sort of thing, unfortunately). Cheers! PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Blog Entries: 9 Recognitions: Homework Help Science Advisor There is a derivation of this (I think) in Kittel, Pauli magnetism. Recognitions: Gold Member Homework Help Also, if you want an internet resource, try the online lecture notes 20 and 21 from here which is about Pauli paramagnetism of a free en gas. Thread Tools | | | | |----------------------------------------------------------------|------------------------------------|---------| | Similar Threads for: Magnetisation due to conduction electrons | | | | Thread | Forum | Replies | | | Advanced Physics Homework | 1 | | | Advanced Physics Homework | 2 | | | Atomic, Solid State, Comp. Physics | 5 | | | Classical Physics | 4 |
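As a sketch of where the hinted expressions lead (an illustration, not the assigned solution), one can let SymPy expand $N_+ - N_-$ to first order in the field and read off the zero-temperature Pauli magnetization:

```python
from sympy import symbols, Rational, pi, simplify

eps_F, mu_B, H, m, h, V = symbols('epsilon_F mu_B H m h V', positive=True)

# Hinted counts: N_pm = (4*pi*V/(3*h**3)) * (2m)**(3/2) * (eps_F +/- mu_B*H)**(3/2)
prefac = 4 * pi * V / (3 * h**3) * (2 * m) ** Rational(3, 2)
N_plus = prefac * (eps_F + mu_B * H) ** Rational(3, 2)
N_minus = prefac * (eps_F - mu_B * H) ** Rational(3, 2)

# Magnetization: each excess spin contributes one Bohr magneton per unit volume.
M = mu_B * (N_plus - N_minus) / V

# Expand to first order in H (weak field, zero temperature).
M_linear = M.series(H, 0, 2).removeO()
print(simplify(M_linear))
# The result is proportional to mu_B**2 * H * sqrt(eps_F); writing M = chi * H gives
# chi = (4*pi/h**3) * (2*m)**(3/2) * mu_B**2 * sqrt(eps_F), up to this sketch's conventions.
```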
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307363033294678, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits
## Are semi-direct products categorical limits? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Products, are very elementary forms of categorical limits. My question is whether in the category of groups, semi-direct products are categorical limits. As was pointed in: http://unapologetic.wordpress.com/2007/03/08/split-exact-sequences-and-semidirect-products/ Bourbaki (General Topology, Prop. 27) gives a universal property: Let $f \colon N \to G$, $g \colon H \to G$ be two homomorphisms into a group $G$, such that $f(\phi_h(n)) = g(h)f(n)g(h^{-1})$ for all $n \in N$, $h \in H$. Then there is a unique homomorphism $k \colon N \rtimes H \to G$ extending $f$ and $g$ in the usual sense. However, I remain unsatisfied. The condition $f(\phi_h(n)) = g(h)f(n)g(h^{-1})$ is a condition on elements of groups, rather than a condition that says that some diagram is commutative. So the question remains: are semi-direct products in the category of groups categorical limits? - 3 It's a certain colimit. Do you know the Grothendieck construction for fibrations? – Martin Brandenburg May 5 2012 at 17:43 1 Colimit? Are you sure about what you're saying? After all products are a particular case of semi-direct products, and they are limits not colimits. I don't know anything about Grothendieck's construction for fibrations... – Makhalan Duff May 5 2012 at 17:46 7 Sure, colimit, because you describe maps on the semi-direct product. Actually this universal property of the semi-direct products in the special case of products does not give you the usual universal property of a categorical product: It gives you that group morphisms $N \times H \to G$ are given by pairs of group morphisms $N \to G$, $H \to G$ which commute pointwise. In other words, $N \times H = N * H / \langle\langle nhn^{-1} h^{-1} \rangle\rangle$, and here you already see the colimit. In a general semi-direct product, this commutator is twisted. – Martin Brandenburg May 5 2012 at 17:52 1 Semi-direct products involve an action $\phi: H \to Aut(N)$ in addition to the two factor groups $H,N$. So the first step would be to figure out how to describe this action in purely category-theoretic terms, without referencing individual elements. I don't know how to do this, but suspect that if one can achieve this, then describing the semi-direct product category-theoretically should be straightforward. – Terry Tao May 5 2012 at 18:09 2 Semidirect products of profinite groups are special because a compactness argument lets you prove one can obtain the action as an inverse limit of actions of finite quotients. – Benjamin Steinberg May 5 2012 at 21:47 show 16 more comments ## 2 Answers This is a partial answer, summing up some of my comments. The semi-direct product is not a limit, but rather it is a colimit. The reason is that the universal property cited above describes maps on the semi-direct product. In the special case that $\phi$ is the trivial action, the semi-direct product becomes the direct product $N \times H$ and the universal property is not just the usual universal property as a product, but rather as a representing object of the pairs of morphisms on $N,H$ which commute pointwise. In a general semi-direct product, this commutation is twisted by an action of $H$ on $N$. So basically the idea is that we have the coproduct $N * H$ of the two groups (which is usually called the free product, which is quite unfortunate), and we impose the relation $h n h^{-1} = \phi_h(n)$. 
The universal property of $N \rtimes H$ is equivalent to the isomorphism $$N \rtimes H = (N * H) / \{h n h^{-1}= \phi_h(n)\}_{h \in H, n \in N},$$ which exhibits $N \rtimes H$ as a special colimit of some diagram associated to $N,H,\phi$. However, this still uses elements in the relations. I think we cannot get rid of these elements, unless we use $2$-colimits. See below. Actually this isomorphism is used very often in group theory in order to recoqnize groups given by some presentation as a semi-direct product. For example, the dihedral group $D_n = \langle r,s : r^n = s^2 = 1, srs=r^{-1} \rangle$ is $\mathbb{Z}/n \rtimes \mathbb{Z}/2$. On the other hand, there is a purely category-theoretic construction which is due to Grothendieck: Let $I$ be a small category and $F : I \to \mathsf{Cat}$ be a diagram of small categories. The Grothendieck construction $\int^I F$ is the category of pairs $(i,x)$, where $i$ is an object of $I$ and $x$ is an object of $F(i)$. A morphism $(i,x) \to (j,y)$ is a pair $(a,f)$, consisting of a morphism $f : i \to j$ and a morphism $a : F(f)(x) \to y$ in $F(j)$. The composition is defined by the rule $(a_2,f_2) \circ (a_1,f_1) = (a_2 \circ F(f_2)(a_1),f_2 \circ f_1)$. Now if $H$ is a monoid, considered as a category with just one object $*$, and $F : H \to \mathsf{Cat}$ is a diagram such that $F(*)=N$ is just a monoid, then $F$ corresponds to a homomorphism of monoids $H \to \mathrm{End}(N)$ and the Grothendieck construction $\int^H N$ has just one object, thus corresponds to a monoid, namely what is usually called the semi-direct product $N \rtimes H$. This is shown by the multiplication rule above. Back to the general case of a diagram $F : I \to \mathsf{Cat}$, the Grothendieck construction $\int^I F$ is the lax 2-colimit of $F$. I don't know the original reference right now, but a very comprehensive account on that is the Appendix A in "The stack of microlocal sheaves" by I. Waschkies. The choice of the morphism $a : F(f)(x) \to y$ in the definition above is precisely the reason for the "2" here. If it was the identity, we would get the usual colimit. Thus, the semi-direct product $N \rtimes H$ is the lax $2$-colimit of the diagram $N : H \to \mathrm{Cat}$. - In your answere $H$ have to be a group (instead change $Aut(N)$ by $End(N)$). If $H$ and $N$ are internal groups, and we can internalizing $Aut(N)$ with a natural map $Aut(N)\times N\to N$ (this is possible in enought good enriched $V$-categories), then the semidirected composition has a internal formulation, furthermore I think that a action $\phi: L\to Aut(L)$ could be a pseudo-functor (consider groups as 2-cetegories, with one object on one (identity) morphism), then I guess that the semidirect-products is some kind of a (co)lax-colimit ...(sorry for my bad English) – Buschi Sergio May 5 2012 at 20:20 Thanks, I've edited the answer. – Martin Brandenburg May 5 2012 at 21:30 If you want, a traditional source on limits (and variations of these) on 2-categories is J. W. Gray "adjointness for 2-categories" (LNM 391). – Buschi Sergio May 6 2012 at 10:35 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. There is (another ?) description of the crossed product in categorical terms. Let ${\rm Mor}(Gp)$ be the category whose objects are homomorphisms of groups and morphisms are commutative diagrams. 
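To make the quotient description above concrete, here is a small illustrative Python sketch (not from the thread) of the multiplication rule $(n_1,h_1)(n_2,h_2)=(n_1\,\phi_{h_1}(n_2),\,h_1h_2)$, specialized to the dihedral example $D_n=\mathbb{Z}/n\rtimes\mathbb{Z}/2$ mentioned in the answer.

```python
from itertools import product

def semidirect_mult(n):
    """Multiplication in Z/n semidirect Z/2, where Z/2 acts on Z/n by negation.
    Elements are pairs (a, s) with a in Z/n (written additively) and s in {0, 1}."""
    def phi(s, a):                 # the action: phi_s(a) = (-1)**s * a
        return (-a) % n if s else a
    def mult(x, y):
        (a1, s1), (a2, s2) = x, y
        return ((a1 + phi(s1, a2)) % n, (s1 + s2) % 2)
    return mult

n = 4
mult = semidirect_mult(n)
elements = list(product(range(n), range(2)))    # 2n elements, as expected for D_n

# Check the defining dihedral relation s r s = r^{-1}: here r = (1, 0), s = (0, 1).
r, s = (1, 0), (0, 1)
print(mult(mult(s, r), s))          # (n-1, 0), i.e. r^{-1}

# Brute-force sanity check that the twisted multiplication is associative.
assert all(mult(mult(x, y), z) == mult(x, mult(y, z))
           for x in elements for y in elements for z in elements)
print("associative group law on", len(elements), "elements")
```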
Let $C$ be the category of "groups acting on groups" whose objects are pairs of groups $(H,G)$ together with a homomorphism $H \to {\rm Aut}(G)$. Morphisms in this category are equivariant homomorphisms. Now, there is a natural forgetful functor $T \colon {\rm Mor}(Gp) \to C$ which sends $H \to G$ to the pair $(H,G)$ with the homomorphism $H \to {\rm Aut}(G)$ given by conjugation. Now, almost by definition, the crossed product is the left-adjoint of this forgetful functor. Indeed, the left adjoint is easily seen to map $(H,G)$ with $H \to {\rm Aut}(G)$ to the inclusion $H \to G \rtimes H$. Being a left-adjoint, the "crossed product" maps colimits to colimits. - This is a very concise categorical description! – Martin Brandenburg May 3 at 15:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9217973947525024, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=2586394
Physics Forums Is curvature guaranteed if only one connection coefficient is 'large' A (3-d or higher) metric which is flat except for one non-trivial metric function of a different coordinate - eg changing dx2 to f(y)dx2 in Euclidean or Minkowski metric [but not f(x)dx2] - is curved if f(y) has a non-zero second derivative; there is no way to make the f(y) 'disappear', ie to make the metric flat, via a coordinate transformation. Einstein's 'entwurf' metric was of this type. Would it be true to say that the metric of this type can't be made flat because there are no other metric functions to 'cancel' with this one during the coordinate transformations? And thus the presence of only one such non-trivial metric function guarantees curvature? The classical limit (Newtonian regime) of the r-geodesic equation for the Schwarzschild (and entwurf) metric has one dominant connection coefficient, all the others are 'small' (or 'much smaller'), which is why it reduces to the Newtonian gravity acceleration equation. Would it be true to say that no coordinate transformation could make all the connection coefficients 'small', so that the presence of only one connection coefficient - or one very dominant one - in a particular coordinate system guarantees curvature? PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Quote by avirab A (3-d or higher) metric which is flat except for one non-trivial metric function of a different coordinate - eg changing dx2 to f(y)dx2 in Euclidean or Minkowski metric [but not f(x)dx2] - is curved if f(y) has a non-zero second derivative; there is no way to make the f(y) 'disappear', ie to make the metric flat, via a coordinate transformation. Einstein's 'entwurf' metric was of this type. Would it be true to say that the metric of this type can't be made flat because there are no other metric functions to 'cancel' with this one during the coordinate transformations? And thus the presence of only one such non-trivial metric function guarantees curvature? I think you mean that if there is no direct coordinates transformation to make a Minkowski spacetime out of a given metric $$g_{\mu\nu}(x^{\alpha})$$ with a non-vanishing second derivative, let n be 4 and $$\alpha=0,..,3$$, for the sake of convenience, then that spacetime is curved, this is true in the sense that $$g_{\mu\nu}(x^{\alpha})\rightarrow \eta_{\mu\nu}$$ cannot be obtained through an explicit coordinates transformation $$x^{\alpha}\rightarrow \bar{x}^{\alpha}$$. But be careful about this. For example, $$d\bar{s}^2 = d\bar{t}^2 +2\bar{x}^2d\bar{t}d\bar{x}-(1-\bar{x}^4)d\bar{x}^2,$$ has a non-vanishing second derivatives wrt $$\bar{x}$$, but it can be made Minkowski through $$\bar{x}=x$$ and $$\bar{t}=t-x^3/3$$ at every point. This last bold-face is so important in our observation of flat spacetimes and distinguishing them with the curved ones. But if $$g_{\mu\nu}$$ be a function of, say, an auxiliary parameter $$X$$ which does not count as a part of coordinate system by which the metric is described, then the first derivatives of metric vanish iff there is no relation between $$X$$ and coordinates! 
Such metric is already flat; for example, $$d\bar{s}^2 = -X^2d\bar{t}^2 +(4X^4+6)d\bar{x}^2,$$ is flat because there exists $$x^{\alpha}:= t= \bar{t}X,x=\bar{x}\sqrt{4X^4+6}.$$ Furthermore even if we assume that there is no such coordinate system, all Christoffel symbols vanish and so does Riemann tensor because X acts as a constant in the operation of differentiation. Here is a very stringent point: You can look at the first metric and say: even if all first derivatives of a metric tensor do not vanish, yet the metric can be flat. This is so tricky because the statement "if all first derivatives of a metric tensor vanish, then it is flat" is only correct when there is no direct coordinates transformation to bring $$g_{\mu\nu}$$ to $${\eta}_{\mu\nu}$$. For example, the Schwarzschild metric is by no means flat because a) there is no such direct coordinates transformation, b) the first derivatives of its metric components do not vanish. Only metric transformations can form a Minkowski version of Schwartzchild metric which in general are not considered coordinates transformation and that they just work locally not globally. The whole thing is known as "local inertia" or "local flatness" and as to how to get it, see here. The classical limit (Newtonian regime) of the r-geodesic equation for the Schwarzschild (and entwurf) metric has one dominant connection coefficient, all the others are 'small' (or 'much smaller'), which is why it reduces to the Newtonian gravity acceleration equation. Would it be true to say that no coordinate transformation could make all the connection coefficients 'small', so that the presence of only one connection coefficient - or one very dominant one - in a particular coordinate system guarantees curvature? As the existence of the non-vanishing Christoffel symbols always does not guarantee that spacetime is curved, so your claim can be locally true! Remember that if r is so small, then the first component of metric and consequently its first derivative in SM gets so large, and thus one cannot make a good realization of whether metric is curved or not. The inverse statement is also true; if r is so large, then SM tends to MM because the first derivatives of metric components are so small. AB Edit: Some errors were corrected. Thanks for the reply but it does not directly relate to my question : I wrote specifically about a flat metric with only ONE additional metric function, as in the example I gave, and in Einstein's entwurf metric. Calculating the Riemann tensor for such a metric (for the conditions stated) shows that it indeed is curved, so that part is not my question. I am asking about the legitimacy of the 'reasoning', and then asking something similar regarding connection coefficients, again where there is only ONE, or only one which is 'large'. (I also would like to know whether a statement to this effect can be made for a non-metric connection) [BTW: The second metric you brought has something missing,dX?dx?] Is curvature guaranteed if only one connection coefficient is 'large' Quote by avirab Thanks for the reply but it does not directly relate to my question : I wrote specifically about a flat metric with only ONE additional metric function, as in the example I gave, and in Einstein's entwurf metric. Calculating the Riemann tensor for such a metric (for the conditions stated) shows that it indeed is curved, so that part is not my question. 
I am asking about the legitimacy of the 'reasoning', and then asking something similar regarding connection coefficients, again where there is only ONE, or only one which is 'large'. (I also would like to know whether a statement to this effect can be made for a non-metric connection) I don't know what you mean by "Einstein's entwurf metric". In the years priror to his derivation of covariant field equations, Einstein eventually realized that his "Entwurf" field equations that were found during 1907-1912 are all wrong! I don't understand what you are trying to give away from something proven already wrong, but maybe it would be awesome to write exact metric you are referring to using Latex or put a typed version of it here or at least give us some article to know what it is all about! [BTW: The second metric you brought has something missing,dX?dx?] I fixed it! AB da2 + db2 + dc2 is flat. f2(b)da2 + db2 + dc2 is not flat if f(b) has a non-zero second derivative. That's it. This form of metric is interesting for several reasons 1) because it is in some sense the simplest change one can make to a flat metric to make it curved. 2) This is basically the metric Einstein presented in 1913 (the 'entwurf' metric) as a model of gravity. 3) In 4-d spacetime, f''(x) = 0 can be for example something like a uniform field If there are two metric functions, eg f(b)da2 + h(a)db2 + dc2, then in theory there can be coordinate transformations which make these 'cancel' each other, so that they are both 1. It of course depends on what functions f and h are. But if h = 1, since Rtrtr ~ f''/f , then if has a non-zero 2nd derivative, there's no way to make the metric flat. I am looking at some intuitive 'reason' that one can give for this (that there's no way that this one metric function f(b) can be made to be 1 via a coordinate transformation), it can only be spread around to functions in the other parts of the metric. And similarly for the connection, if there is only one non zero or only one 'much larger' than all the others, which are negligibly small, as is the case for the classical limit of Schwarzschild. I just realized that my early post had some error in it, so I fix it here: Quote by Altabeh I think you mean that if there is a direct coordinates transformation to make a Minkowski spacetime out of a given metric $$g_{\mu\nu}(x^{\alpha})$$ with a non-vanishing second derivative, let n be 4 and $$\alpha=0,..,3$$, for the sake of convenience, then that spacetime is not curved, this is true in the sense that $$g_{\mu\nu}(x^{\alpha})\rightarrow \eta_{\mu\nu}$$ can be obtained through an explicit coordinates transformation $$x^{\alpha}\rightarrow \bar{x}^{\alpha}$$. For example, $$d\bar{s}^2 = d\bar{t}^2 +2\bar{x}^2d\bar{t}d\bar{x}-(1-\bar{x}^4)d\bar{x}^2,$$ has a non-vanishing second derivatives wrt $$\bar{x}$$, and it can be made Minkowski through $$\bar{x}=x$$ and $$\bar{t}=t-x^3/3$$ at every point. This last bold-face is so important in our observation of flat spacetimes and distinguishing them with the curved ones. Now according to this, I think you are able to answer the questions huddling in your mind: Quote by avirab da2 + db2 + dc2 is flat. f(b)da2 + db2 + dc2 is not flat if f(b) has a non-zero second derivative. That's it. This metric is interesting for several reasons Take the metric $$ds^2=(1+ax)^2dt^2-dx^2-dy^2-dz^2,$$ where $$a$$ is some constant. $$(1+ax)^2$$ has a non-zero second derivative wrt x, but the metric is flat!! 
If there are two metric functions, eg f(b)da^2 + h(a)db^2 + dc^2, then in theory there can be coordinate transformations which make these 'cancel' each other, so that they are both 1. This is not right, unless one adds the caveat "it of course depends on what functions f and h are". But if h = 1, and f has a non-zero 2nd derivative, there's no way to make the metric flat. I made it flat! Look at the example above! I am looking at some intuitive 'reason' that one can give for this (that there's no way that this one metric function f(b) can be made to be 1 via a coordinate transformation), it can only be spread around to functions in the other parts of the metric. You just need to have the Riemann tensor vanish to get a flat spacetime; this can't always be read off from the metric directly. For instance, the above metric has non-vanishing Christoffel symbols but is not curved, since its Riemann tensor is zero. These exceptional cases do not let us distinguish flat spacetimes from curved ones by inspection. In GR, one uses the geodesic coordinate system to make sure that the metric gets flat locally, because once we know that the Christoffel symbols vanish, the Riemann tensor is necessarily zero; but the converse is not always true, as you can see! And similarly for the connection, if there is only one non zero or only one 'much larger' than all the others, which are negligibly small, as is the case for the classical limit of Schwarzschild. Only locally true! Why don't you pay a little attention to my notes? I said that the Schwarzschild connections (Christoffel symbols) do not vanish, so the spacetime is curved everywhere and tends to Minkowski (flat) at large distances (r-->oo) from the source, leading the connections to vanish locally because they are of dimension 1/r outside the source of the gravitational field! AB Hi. I had written f in the metric instead of f2. See corrected version above. My points and question still stand. And re the connections: I am not talking of BC's at large r, or vanishing locally, I am talking of the classical limit. Quote by avirab Hi. I had written f in the metric instead of f2. See corrected version above. Now the situation is way different! The answer is simple: looking at the Ricci tensor of your metric, you can find $$R_{00}=-f(b)\frac{d^2 f(b)}{db^2},$$ $$R_{11}=[f(b)]^{-1}\frac{d^2 f(b)}{db^2}$$ and $$R_{22}=0$$. Other components are all zero. So a space(time) of the kind you've given is flat iff its Ricci tensor vanishes. Now the question is answered! But if $$g_{11}=-h(a)^2$$, then check that the space(time) is flat if $$\frac{f \left( b \right) }{h\left( a \right)} = \frac{{\frac {d^{2}h \left( a \right)}{d{a}^{2}}}}{{\frac {d^{2}f \left( b \right)}{d{b}^{2}}}}.$$ (Tip: for this case, as well, the vanishing of the Ricci tensor implies the Riemann tensor = 0.) My points and question still stand. Yeah, in the case of changing the whole stuff again!! And re the connections: I am not talking of BC's at large r, or vanishing locally, I am talking of the classical limit. Back to the question, Would it be true to say that no coordinate transformation could make all the connection coefficients 'small', so that the presence of only one connection coefficient - or one very dominant one - in a particular coordinate system guarantees curvature? Not always! Because in the metric $$ds^2=(1+ax)^2dt^2-dx^2-dy^2-dz^2,$$ assuming $$ax<<1$$, both non-vanishing connections are approximately equal to $$a$$; so for a large $$a$$ your assumption fails, because the spacetime is flat!
The only thing you have to remember is that the Riemann tensor always has to be non-zero to have a curved spacetime! AB Firstly, thanks, your point about the metric helped me spot the typo of f instead of f2. Also: re what you wrote: for ax <<1, a << 1/x, so x <<<<. But all this is irrelevant to my question: I am asking a subtle question, please try to understanding the issue even if I am not being sufficiently precise, I value your input: I am talking of a curved spacetime not a flat one, where there is effectively only one connection coefficient in the geodesic equation, as is the case for the r-geodesic equation in the classical limit of the Schwarzschild metric. For this case it is perhaps impossible to make a coordinate transformation from this situation of only one dominant/surviving connection coefficient, all the others terms are tiny in comparison due to the factors of c2, to another metric with only all very tiny connection coefficient contributions in the geodesic equation, because then the Newtonian limit acceleration would be too small, after all we know there is gravitational acceleration near the Earth surface; so the fact of the existence/survival of only one (dominant or large connection coefficient) in the geodesic equation is significant for curvature, and the functional form of this one surviving connection coefficent perhaps encodes some 'reliable' information about curvature, even without directly computing Riemann. Indeed, the GM/r2 of the connection coefficient is a real phenomenon - although the acceleration disappears in free fall, GM/r2 is certainly the correct relative acceleration between the source rest frame and the falling particle, it is not some arbitrary funciton arising from strange choice of coordinates. (Of course a relative acceleration of this type is a form of Riemann curvature measure, so it is not impossible that this connection coefficient gives information about Riemann curvature.) Maybe if there are more than one large connection coefficients they can in theory be made to cancel via a coordinate transformation, but if there is only one (or one very dominant one) it cannot cancel, and so it demonstrates something real, ie the presence of Riemann curvature. That's the point or question. And similarly for the metric: the fact that it is flat with a very simple modification, ie changing dx to f(y)dx in the metric, so there is only this one fuinction, perhaps that is why there's no coordinate transformation which can make the metric flat, as opposed to if there are two functions for example, their contributions to RIemann can cancel. Thanks. Quote by avirab Firstly, thanks, your point about the metric helped me spot the typo of f instead of f2. Also: re what you wrote: for ax <<1, a << 1/x, so x <<<<. But all this is irrelevant to my question: I am asking a subtle question, please try to understanding the issue even if I am not being sufficiently precise, I value your input: I am talking of a curved spacetime not a flat one, where there is effectively only one connection coefficient in the geodesic equation, as is the case for the r-geodesic equation in the classical limit of the Schwarzschild metric. 
For this case it is perhaps impossible to make a coordinate transformation from this situation of only one dominant/surviving connection coefficient, all the others terms are tiny in comparison due to the factors of c2, to another metric with only all very tiny connection coefficient contributions in the geodesic equation, because then the Newtonian limit acceleration would be too small, after all we know there is gravitational acceleration near the Earth surface; so the fact of the existence/survival of only one (dominant or large connection coefficient) in the geodesic equation is significant for curvature, and the functional form of this one surviving connection coefficent perhaps encodes some 'reliable' information about curvature, even without directly computing Riemann. Indeed, the GM/r2 of the connection coefficient is a real phenomenon - although the acceleration disappears in free fall, GM/r2 is certainly the correct relative acceleration between the source rest frame and the falling particle, it is not some arbitrary funciton arising from strange choice of coordinates. (Of course a relative acceleration of this type is a form of Riemann curvature measure, so it is not impossible that this connection coefficient gives information about Riemann curvature.) Maybe if there are more than one large connection coefficients they can in theory be made to cancel via a coordinate transformation, but if there is only one (or one very dominant one) it cannot cancel, and so it demonstrates something real, ie the presence of Riemann curvature. That's the point or question. And similarly for the metric: the fact that it is flat with a very simple modification, ie changing dx to f(y)dx in the metric, so there is only this one fuinction, perhaps that is why there's no coordinate transformation which can make the metric flat, as opposed to if there are two functions for example, their contributions to RIemann can cancel. Thanks. First of all, the largeness or smallness of a connection does not have anything to do in determining a curved or flat spacetime. So talking continuously about that the only connection is perhaps dominant or large, so the spacetime is not flat without wanting to check the Riemann tensor directly is a huge errancy! You have to know that if there is only one non-zero and non-constant connection, then the Riemann tensor is always non-zero! But if there is a couple of non-zero and non-constant connections, then the situation is a little bit different: the Riemann tensor could vanish (as in my example $$ds^2=(1+ax)^2dt^2-dx^2-dy^2-dz^2$$ because the function $$f=1+ax$$ has a vanishing second derivative)!! Note that for spacetimes with only f^2 function, there are always two non-zero connections so the vanishing of second derivative of f would imply the metric is flat otherwise it is curved. But if we have the h^2 function, too, then there are always four non-zero connections and this calls for more pedantism: Deep down, if the criterion $$\frac{f \left( b \right) }{h\left( a \right)} = \frac{{\frac {d^{2}h \left( a \right)}{d{a}^{2}}}}{{\frac {d^{2}f \left( b \right)}{d{b}^{2}}}}$$ holds, then the spacetime will be flat. Otherwise it is curved! In either cases, the spacetime being curved has nothing to do with the magniude of connections! And finally that seems so bizarre to me that you are talking about "...gravitational acceleration near the Earth surface..." while you push me away from local discussion of the problem!! 
AB
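Not part of the original thread, but the two curvature claims discussed above are easy to spot-check with a computer algebra system. The sketch below assumes sympy's diffgeom module (the exact CoordSystem constructor arguments vary a little between sympy versions) and computes Riemann components for the metric f(y)^2 dx^2 + dy^2 + dz^2 and for the Rindler-like metric (1 + ax)^2 dt^2 - dx^2.

```python
# A minimal sketch, not from the thread: spot-checking the curvature claims with sympy.
from sympy import Function, Symbol, simplify
from sympy.diffgeom import (Manifold, Patch, CoordSystem,
                            TensorProduct as TP, metric_to_Riemann_components)

M = Manifold('M', 3)
P = Patch('P', M)
C = CoordSystem('C', P, ['x', 'y', 'z'])   # constructor details differ across sympy versions
x, y, z = C.coord_functions()
dx, dy, dz = C.base_oneforms()

# Claim 1: g = f(y)^2 dx^2 + dy^2 + dz^2 is curved exactly when f''(y) != 0.
f = Function('f')
g1 = f(y)**2 * TP(dx, dx) + TP(dy, dy) + TP(dz, dz)
R1 = metric_to_Riemann_components(g1)
print(simplify(R1[0, 1, 0, 1]))   # a multiple of f''(y) (times f or 1/f); zero iff f'' = 0

# Claim 2: (1 + a*x)^2 dt^2 - dx^2 has non-vanishing Christoffels but a zero Riemann tensor.
N = Manifold('N', 2)
Q = Patch('Q', N)
D = CoordSystem('D', Q, ['t', 'x'])
t2, x2 = D.coord_functions()
dt2, dx2 = D.base_oneforms()
a = Symbol('a')
g2 = (1 + a*x2)**2 * TP(dt2, dt2) - TP(dx2, dx2)
R2 = metric_to_Riemann_components(g2)
print(simplify(R2[0, 1, 0, 1]))   # 0: the only independent 2-d component vanishes, so flat
```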
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 38, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9281226992607117, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/128239-sine-cosine.html
# Thread:

1. ## Sine and Cosine

Not sure what this question is asking me, when there are no numbers to calculate. Can someone please explain this to me: If Cos(x) = m, then what is the value of Cos(180 - x)? Thanks

2. $\cos(a-b) = \cos a \cos b + \sin a \sin b$

3. It's a trigonometric identity. cos(180 - x) is always the opposite of cos(x). You could come to that conclusion several ways, but the easiest method is to look at a typical angle Q on a graph and compare it with its supplement (which is defined as 180 - Q), and then you will see why the conclusion is true.

4. Originally Posted by icemanfan It's a trigonometric identity. cos(180 - x) is always the opposite of cos(x). You could come to that conclusion several ways, but the easiest method is to look at a typical angle Q on a graph and compare it with its supplement (which is defined as 180 - Q), and then you will see why the conclusion is true. So the answer that I would be looking for then is -cos(180-x), or am I way out to lunch on this one?

5. Originally Posted by nuckers so the answer that I would be looking for then is -cos(180-x), or am I way out to lunch on this one Well, if cos(x) = m and cos(180 - x) is the opposite of cos(x), then what does cos(180 - x) equal in terms of m?

6. Originally Posted by nuckers so the answer that I would be looking for then is -cos(180-x), or am I way out to lunch on this one if $\cos{x} = m$, then $\cos(180-x) = -m$ ... that's all.

7. Originally Posted by icemanfan Well, if cos(x) = m and cos(180 - x) is the opposite of cos(x), then what does cos(180 - x) equal in terms of m? Ok, I understand now, thanks so much for the help
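Not part of the thread, but the identity the posters settle on, cos(180° - x) = -cos x, is easy to sanity-check numerically:

```python
import math

x_deg = 37.0                                   # any test angle works
m = math.cos(math.radians(x_deg))
lhs = math.cos(math.radians(180.0 - x_deg))
print(m, lhs)                                  # lhs equals -m up to rounding error
assert math.isclose(lhs, -m)
```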
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9601726531982422, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/83628/list
## Return to Question

Post Closed as "too localized" by Felipe Voloch, Dror Speiser, fedja, Igor Rivin, Emil Jeřábek

# Explicit solutions of C(n,2)=x^2?

In the paper "On a Diophantine Equation" of Erdös it is said at some point that it is well known that $C(n,2)=x^2$ has infinitely many integer solutions. I am just wondering about the formula generating all possible $n \geq 2$, because I wonder whether they all come from integer solutions of the Pell equation $2x^2-y^2=\pm 1$ generated by the fundamental unit, or whether it is possible to have different $n$'s that do not come from the solutions of the Pell equation. For example $C(2,2)=1$ is a square, and $C(9,2)=36$ is a square too, so the first two $n$'s are 2 and 9. One can also see this from the solutions of the Pell equation $2x^2-y^2=\pm 1$: take $x=1, y=0$ for $n=2=2\cdot 1$ and take $x=2, y=3$ for $n=9=3^2$.
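A brute-force check, not part of the question, of the first few n ≥ 2 with C(n,2) a perfect square, together with the pairing (assumed here) n = 2x² or n = y² for solutions of the Pell equation 2x² − y² = ±1:

```python
import math

# n >= 2 with C(n,2) = n(n-1)/2 a perfect square (brute force).
hits = [n for n in range(2, 2000) if math.isqrt(n*(n-1)//2)**2 == n*(n-1)//2]
print(hits)   # starts 2, 9, 50, 289, 1682, ...

# Assumed pairing with the Pell equation 2x^2 - y^2 = +/-1:
#   n = 2x^2 when 2x^2 - y^2 = +1, and n = y^2 when 2x^2 - y^2 = -1 (e.g. n = 9 from x=2, y=3).
for n in hits:
    witnesses = [(x, y) for x in range(1, 60) for y in range(60)
                 if 2*x*x - y*y in (1, -1) and n in (2*x*x, y*y)]
    print(n, witnesses[:1])   # each n found above has such a Pell witness
```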
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396610856056213, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/47135-chapter-one-calculus-question.html
# Thread: 1. ## Chapter One Calculus Question This is due monday... The entirety of information I have is as follows: lim 6= x-->3 That's it. No instructions. There's about six of these. How am I supposed to handle them? 2. Originally Posted by Oscar201 This is due monday... The entirety of information I have is as follows: lim 6= x-->3 That's it. No instructions. There's about six of these. How am I supposed to handle them? It's expected that you know that the limiting value of a constant for any value of x is just the value of the constant: $\lim_{x \rightarrow a} C = C$. If you're being asked to solve questions on limits, I would assume you've been taught something about them ....? 3. No, actually, I wasn't. The teacher said that we would start on limits today, but spent the entire period reviewing pre-calc homework. Then, when she ran out of time, she assigned limit homework anyway. 4. Originally Posted by Oscar201 No, actually, I wasn't. The teacher said that we would start on limits today, but spent the entire period reviewing pre-calc homework. Then, when she ran out of time, she assigned limit homework anyway. Don't take it the wrong way, but it wouldn't hurt for you to open your textbook and read through the section and examples of said problems. Think about it...why would you want to come ask a question, wait 5-10 minutes, then get an answer, when you can just go through the section in your textbook and find your answer there in half the time or less? By logic, it would make more sense to check your book first. I know that the constant rule for limits is in there somewhere. I own six different calculus books, and I can find that rule sticking its nose out in the open in every, single one of them. It's an important life skill. Sometimes you won't have us to help you: it may be just you, your lecture notes, and your textbook(s). Get comfortable with your textbook. A LOT of high schoolers (including me at one point) saw math textbooks as just books full of exercises, and we highly underestimate its capabilities in assisting us. It wasn't until I started college two weeks ago that I began to recognize math textbooks as the true treasure chests they are. It has provided invaluable assistance in many things that I have had questions about, and has saved me countless hours of constant Q & A sessions with people on here, my classmates, as well as my professor. I've actually been able to tutor some of my classmates on concepts from the book that they didn't quite get. I'm a math major as well as the biggest math nerd you have ever met in your life. Maybe these things I'm saying to you comes naturally to me, and not to other people. That's fine. I've come to accept that not everybody loves math like I do, nor do they have that gift for it. But what I have said to you this evening can not only help you in math, it can help you out in your other courses, in college, graduate school, your job, wherever you could possibly go. Hopefully you'll get something out of my little rant for this evening. I wish you the best on your future endeavors, and we'll always be happy to assist you should the need arise. 5. I have to agree with the above post. Throughout my schooling, I noticed that so many students didn't even bother to glance at the lesson sections within math books. If a problem stumped them, they would either immediately run to somebody else for help, or just give up. I learned early on in my studies of math that sometimes you just have to try and teach yourself a little. 
Many of my teachers were very surprised when I mentioned that I actually read the book. 6. Originally Posted by Oscar201 No, actually, I wasn't. The teacher said that we would start on limits today, but spent the entire period reviewing pre-calc homework. Then, when she ran out of time, she assigned limit homework anyway.
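For completeness (not part of the thread), the constant rule quoted earlier, $\lim_{x \rightarrow a} C = C$, can be confirmed with a CAS:

```python
from sympy import Symbol, limit

x = Symbol('x')
print(limit(6, x, 3))   # 6: the limit of a constant is just the constant
```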
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9831185340881348, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Rotation_matrix
# Rotation matrix In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example the matrix $R = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}$ rotates points in the xy-Cartesian plane counter-clockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation using a rotation matrix R, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv. Since matrix multiplication has no effect on the zero vector (i.e., on the coordinates of the origin), rotation matrices can only be used to describe rotations about the origin of the coordinate system. Rotation matrices provide a simple algebraic description of such rotations, and are used extensively for computations in geometry, physics, and computer graphics. In two-dimensional space, a rotation can be simply described by an angle θ of rotation, but it can also be represented by the four entries of a rotation matrix with two rows and two columns. In three-dimensional space, every rotation can be interpreted as a rotation by a given angle about a single fixed axis of rotation (see Euler's rotation theorem), and hence it can be simply described by an angle and a vector with three entries. However, it can also be represented by the nine entries of a rotation matrix with three rows and three columns. The notion of rotation is not commonly used in dimensions higher than 3; there is a notion of a rotational displacement, which can be represented by a matrix, but not associated single axis or angle. Rotation matrices are square matrices, with real entries. More specifically they can be characterized as orthogonal matrices with determinant 1: $R^{T} = R^{-1}, \det R = 1\,$. The set of all such matrices of size n forms a (generally not commutative) group, known as the special orthogonal group SO(n). One can interpret the row vectors of a rotation matrix as unit vectors along the axes of the rotated space, in the original coordinate system. Likewise, the column vectors represent unit vectors along the axes of the original space, in the rotated coordinate system. (Pique 1990) In some literature, the term rotation is generalized to include improper rotations, characterized by orthogonal matrices with determinant -1 (instead of +1). These combine proper rotations with reflections (which invert orientation). In this sense the members of the (general) orthogonal group O(n) may also be called rotation matrices. In other cases, where reflections are not being considered, the label proper may be dropped unless demands of clarity make it wise to specify it. This is the reason it is more frequent to associate the term rotation matrix with members of SO(n) (as in the rest of this article) while it may also be found referring to members of O(n). ## In two dimensions A counterclockwise rotation of a vector through angle θ. The vector is initially aligned with the x-axis. In two dimensions every rotation matrix has the following form: $R(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}$. This rotates column vectors by means of the following matrix multiplication: $\begin{bmatrix} x' \\ y' \\ \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}\begin{bmatrix} x \\ y \\ \end{bmatrix}$. 
So the coordinates (x',y') of the point (x,y) after rotation are: $x' = x \cos \theta - y \sin \theta\,$, $y' = x \sin \theta + y \cos \theta\,$. The direction of vector rotation is counterclockwise if θ is positive (e.g. 90°), and clockwise if θ is negative (e.g. -90°). Thus the clockwise rotation matrix is found as: $R(-\theta) = \begin{bmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \\ \end{bmatrix}\,$. Note that the two-dimensional case is the only non-trivial (e.g. one dimension) case where the rotation matrices group is commutative, so that it does not matter the order in which multiple rotations are performed. ### Non-standard orientation of the coordinate system A rotation through angle θ with non-standard axes. If a standard right-handed Cartesian coordinate system is used, with the x axis to the right and the y axis up, the rotation R(θ) is counterclockwise. If a left-handed Cartesian coordinate system is used, with x directed to the right but y directed down, R(θ) is clockwise. Such non-standard orientations are rarely used in mathematics but are common in 2D computer graphics, which often have the origin in the top left corner and the y-axis down the screen or page.[1] See below for other alternative conventions which may change the sense of the rotation produced by a rotation matrix. ### Common rotations Particularly useful are the matrices for 90° and 180° rotations: $R(90^\circ) = \begin{bmatrix} 0 & -1 \\[3pt] 1 & 0 \\ \end{bmatrix}$ (90° counterclockwise rotation) $R(180^\circ) = \begin{bmatrix} -1 & 0 \\[3pt] 0 & -1 \\ \end{bmatrix}$ (180° rotation in either direction – a half-turn) $R(270^\circ) = \begin{bmatrix} 0 & 1 \\[3pt] -1 & 0 \\ \end{bmatrix}$ (270° counterclockwise rotation, the same as a 90° clockwise rotation) ## In three dimensions ### Basic rotations The following three basic (gimbal-like) rotation matrices rotate vectors about the x, y, or z axis, in three dimensions: $\begin{alignat}{1} R_x(\theta) &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\[3pt] 0 & \sin \theta & \cos \theta \\[3pt] \end{bmatrix} \\[6pt] R_y(\theta) &= \begin{bmatrix} \cos \theta & 0 & \sin \theta \\[3pt] 0 & 1 & 0 \\[3pt] -\sin \theta & 0 & \cos \theta \\ \end{bmatrix} \\[6pt] R_z(\theta) &= \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\[3pt] \sin \theta & \cos \theta & 0\\[3pt] 0 & 0 & 1\\ \end{bmatrix} \end{alignat}$ For column vectors, each of these basic vector rotations appears counter-clockwise when the axis about which they occur points toward the observer, and the coordinate system is right-handed. Rz, for instance, would rotate toward the y-axis a vector aligned with the x-axis. This is similar to the rotation produced by the above mentioned 2-D rotation matrix. See below for alternative conventions which may apparently or actually invert the sense of the rotation produced by these matrices. ### General rotations Other rotation matrices can be obtained from these three using matrix multiplication. For example, the product $R = R_x(\gamma) \, R_y(\beta) \, R_z(\alpha)\,\!$ represents a rotation whose yaw, pitch, and roll angles are α, β, and γ, respectively. More formally, it is a rotation whose Euler angles are α, β, and γ (using the z-y-x convention for Euler angles). Similarly, the product $R = R_z(\gamma) \, R_x(\beta) \, R_y(\alpha)\,\!$ represents a rotation whose Euler angles are α, β, and γ (using the y-x-z convention for Euler angles). 
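A small numpy sketch (helper names are mine, not the article's) of the basic rotation matrices Rx, Ry, Rz given above and of one composed rotation; note that the order of the factors matters.

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

alpha, beta, gamma = 0.1, 0.2, 0.3        # yaw, pitch, roll
R = Rx(gamma) @ Ry(beta) @ Rz(alpha)      # z-y-x convention, pre-multiplying column vectors
print(R @ np.array([1.0, 0.0, 0.0]))      # a rotated unit vector
print(np.allclose(Rx(gamma) @ Rz(alpha), Rz(alpha) @ Rx(gamma)))   # False: order matters
```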
These matrices produce the desired effect only if they are used to pre-multiply column vectors (see Ambiguities for more details). ### Conversion from and to axis-angle Every rotation in three dimensions is defined by its axis — a direction that is left fixed by the rotation — and its angle — the amount of rotation about that axis (Euler rotation theorem). There are several methods to compute an axis and an angle from a rotation matrix (see also axis-angle). Here, we only describe the method based on the computation of the eigenvectors and eigenvalues of the rotation matrix. It is also possible to use the trace of the rotation matrix. #### Determining the axis A rotation R around axis u can be decomposed using 3 endomorphisms P, (I - P), and Q (click to enlarge). Given a rotation matrix R, a vector u parallel to the rotation axis must satisfy $R\textbf{u} = \textbf{u}$ since the rotation of $\textbf{u}$ around the rotation axis must result in $\textbf{u}$. The equation above may be solved for $\textbf{u}$ which is unique up to a scalar factor. Further, the equation may be rewritten $R\textbf{u} = I \textbf{u} \quad \Rightarrow \quad (R - I) \textbf{u} = 0$ which shows that $\textbf{u}$ is the null space of $R - I$. Viewed another way, $\textbf{u}$ is an eigenvector of R corresponding to the eigenvalue $\lambda = 1$ (every rotation matrix must have this eigenvalue). #### Determining the angle To find the angle of a rotation, once the axis of the rotation is known, select a vector $\textbf{v}$ perpendicular to the axis. Then the angle of the rotation is the angle between $\textbf{v}$ and $R\textbf{v}$. A much easier method, however, is to calculate the trace (i.e. the sum of the diagonal elements of the rotation matrix) which is $1+2\cos\theta$. #### Rotation matrix from axis and angle For some applications, it is helpful to be able to make a rotation with a given axis. Given a unit vector u = (ux, uy, uz), where ux2 + uy2 + uz2 = 1, the matrix for a rotation by an angle of θ about an axis in the direction of u is $R = \begin{bmatrix} \cos \theta +u_x^2 \left(1-\cos \theta\right) & u_x u_y \left(1-\cos \theta\right) - u_z \sin \theta & u_x u_z \left(1-\cos \theta\right) + u_y \sin \theta \\ u_y u_x \left(1-\cos \theta\right) + u_z \sin \theta & \cos \theta + u_y^2\left(1-\cos \theta\right) & u_y u_z \left(1-\cos \theta\right) - u_x \sin \theta \\ u_z u_x \left(1-\cos \theta\right) - u_y \sin \theta & u_z u_y \left(1-\cos \theta\right) + u_x \sin \theta & \cos \theta + u_z^2\left(1-\cos \theta\right) \end{bmatrix}.$[2] This can be written more concisely as $R = I\cos\theta + \sin\theta[\mathbf u]_{\times} + (1-\cos\theta)\mathbf{u}\otimes\mathbf{u},$ where $[\mathbf u]_{\times}$ is the cross product matrix of u, ⊗ is the tensor product and I is the Identity matrix. This is a matrix form of Rodrigues' rotation formula, with $\mathbf{u}\otimes\mathbf{u} = \begin{bmatrix} u_x^2 & u_x u_y & u_x u_z \\[3pt] u_x u_y & u_y^2 & u_y u_z \\[3pt] u_x u_z & u_y u_z & u_z^2 \end{bmatrix},\qquad [\mathbf u]_{\times} = \begin{bmatrix} 0 & -u_z & u_y \\[3pt] u_z & 0 & -u_x \\[3pt] -u_y & u_x & 0 \end{bmatrix}.$ If the 3D space is right-handed, this rotation will be counterclockwise for an observer placed so that the axis u goes in his direction (Right-hand rule). 
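A sketch of the axis-angle formula just given (the helper name is mine): build R = cosθ·I + sinθ·[u]× + (1 − cosθ)·u⊗u, then check that the axis is fixed and that the angle can be read back from the trace.

```python
import numpy as np

def rotation_from_axis_angle(axis, theta):
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)                      # the formula assumes a unit axis
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])             # cross-product matrix [u]_x
    return np.cos(theta) * np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * np.outer(u, u)

axis, theta = np.array([1.0, 1.0, 0.0]), 0.7
R = rotation_from_axis_angle(axis, theta)
print(np.allclose(R @ axis, axis))                       # True: the rotation axis is left fixed
print(np.isclose(np.trace(R), 1 + 2 * np.cos(theta)))    # True: trace = 1 + 2 cos(theta)
```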
## Properties of a rotation matrix In three dimensions, for any rotation matrix $R_{a,\theta} \in \mathbb{R}^3$, where a is a rotation axis and θ a rotation angle, • $R_{a, \theta}^T = R_{a, \theta}^{-1}$ (i.e., $R_{a,\theta} \$ is an orthogonal matrix) • $\det\left(R_{a, \theta}\right) = 1$ (i.e, the determinant of $R_{a,\theta} \$ is 1) • $R_{a, (\theta + r)} = R_{a, \theta} \cdot R_{a,r}$ • $R_{a, 0} = I \$ (where $I \in \mathbb{R}^n$ is the identity matrix) • The eigenvalues of $R_{a,\theta} \$ are $\{1, e^{\pm i\theta} \} = \{1,\ \cos(\theta)+i\sin(\theta),\ \cos(\theta)-i\sin(\theta)\},$ where i is the standard imaginary unit with the property $i^2 = -1$ • The trace of $R_{a,\theta} \$ is $1 + 2\cos(\theta) \,$ equivalent to the sum of its eigenvalues. Some of these properties can be generalised to any number of dimensions. In other words, they hold for any rotation matrix $R_{a,\theta} \in \mathbb{R}^n$. For instance, in two dimensions the properties hold with the following exceptions: • a is not a given axis, but a point (rotation center) which must coincide with the origin of the coordinate system in which the rotation is represented. • Consequently, the four elements of the rotation matrix depend only on θ, hence we write $R_{\theta} \$, rather than $R_{a,\theta} \$ • The eigenvalues of $R_{\theta} \$ are $\{e^{\pm i\theta} \} = \{\cos(\theta)+i\sin(\theta),\ \cos(\theta)-i\sin(\theta)\}.$ • The trace of $R_{\theta} \$ is $2\cos(\theta) \,$ equivalent to the sum of its eigenvalues. ## Examples The 2×2 rotation matrix $Q = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$ corresponds to a 90° planar rotation. The transpose of the 2×2 matrix $M = \begin{bmatrix} 0.936 & 0.352 \\ 0.352 & -0.936 \end{bmatrix}$ is its inverse, but since its determinant is −1 this is not a rotation matrix; it is a reflection across the line 11y = 2x. The 3×3 rotation matrix $Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{\sqrt{3}}{2} & \frac12 \\ 0 & -\frac12 & \frac{\sqrt{3}}{2} \end{bmatrix}$ corresponds to a −30° rotation around the x axis in three-dimensional space. The 3×3 rotation matrix $Q = \begin{bmatrix} 0.36 & 0.48 & -0.8 \\ -0.8 & 0.60 & 0 \\ 0.48 & 0.64 & 0.60 \end{bmatrix}$ corresponds to a rotation of approximately -74° around the axis (−1⁄3,2⁄3,2⁄3) in three-dimensional space. The 3×3 permutation matrix $P = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$ is a rotation matrix, as is the matrix of any even permutation, and rotates through 120° about the axis x = y = z. The 3×3 matrix $M = \begin{bmatrix} 3 & -4 & 1 \\ 5 & 3 & -7 \\ -9 & 2 & 6 \end{bmatrix}$ has determinant +1, but its transpose is not its inverse, so it is not a rotation matrix. The 4×3 matrix $M = \begin{bmatrix} 0.5 & -0.1 & 0.7 \\ 0.1 & 0.5 & -0.5 \\ -0.7 & 0.5 & 0.5 \\ -0.5 & -0.7 & -0.1 \end{bmatrix}$ is not square, and so cannot be a rotation matrix; yet MTM yields a 3×3 identity matrix (the columns are orthonormal). The 4×4 matrix $Q = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}$ describes an isoclinic rotation, a rotation through equal angles (180°) through two orthogonal planes. The 5×5 rotation matrix $Q = \begin{bmatrix} 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$ rotates vectors in the plane of the first two coordinate axes 90°, rotates vectors in the plane of the next two axes 180°, and leaves the last coordinate axis unmoved. 
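The two defining checks from this section, R^T R = I and det R = +1, applied to two of the example matrices above (a sketch, not from the article):

```python
import numpy as np

def is_rotation(M, tol=1e-9):
    M = np.asarray(M, dtype=float)
    return (M.shape[0] == M.shape[1]                              # square
            and np.allclose(M.T @ M, np.eye(M.shape[1]), atol=tol)  # orthonormal columns
            and np.isclose(np.linalg.det(M), 1.0, atol=tol))      # proper (no reflection)

Q = [[0.36, 0.48, -0.8], [-0.8, 0.60, 0.0], [0.48, 0.64, 0.60]]   # rotation from the Examples
M = [[0.936, 0.352], [0.352, -0.936]]                              # reflection: det = -1
print(is_rotation(Q), is_rotation(M))                              # True False
```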
## Geometry In Euclidean geometry, a rotation is an example of an isometry, a transformation that moves points without changing the distances between them. Rotations are distinguished from other isometries by two additional properties: they leave (at least) one point fixed, and they leave "handedness" unchanged. By contrast, a translation moves every point, a reflection exchanges left- and right-handed ordering, and a glide reflection does both. A rotation that does not leave "handedness" unchanged is an improper rotation or a rotoinversion. If we take the fixed point as the origin of a Cartesian coordinate system, then every point can be given coordinates as a displacement from the origin. Thus we may work with the vector space of displacements instead of the points themselves. Now suppose (p1,…,pn) are the coordinates of the vector p from the origin, O, to point P. Choose an orthonormal basis for our coordinates; then the squared distance to P, by Pythagoras, is $d^2(O,P) = \| \bold{p} \|^2 = \sum_{r=1}^n p_r^2$ which we can compute using the matrix multiplication $\| \bold{p} \|^2 = \begin{bmatrix}p_1 \cdots p_n\end{bmatrix} \begin{bmatrix}p_1 \\ \vdots \\ p_n \end{bmatrix} = \bold{p}^T \bold{p} .$ A geometric rotation transforms lines to lines, and preserves ratios of distances between points. From these properties we can show that a rotation is a linear transformation of the vectors, and thus can be written in matrix form, Qp. The fact that a rotation preserves, not just ratios, but distances themselves, we can state as $\bold{p}^T \bold{p} = (Q \bold{p})^T (Q \bold{p}) , \,\!$ or $\begin{align} \bold{p}^T I \bold{p}&{}= (\bold{p}^T Q^T) (Q \bold{p}) \\ &{}= \bold{p}^T (Q^T Q) \bold{p} . \end{align}$ Because this equation holds for all vectors, p, we conclude that every rotation matrix, Q, satisfies the orthogonality condition, $Q^T Q = I . \,\!$ Rotations preserve handedness because they cannot change the ordering of the axes, which implies the special matrix condition, $\det Q = +1 . \,\!$ Equally important, we can show that any matrix satisfying these two conditions acts as a rotation. ## Multiplication The inverse of a rotation matrix is its transpose, which is also a rotation matrix: $\begin{align} (Q^T)^T (Q^T) &{}= Q Q^T = I\\ \det Q^T &{}= \det Q = +1. \end{align}$ The product of two rotation matrices is a rotation matrix: $\begin{align} (Q_1 Q_2)^T (Q_1 Q_2) &{}= Q_2^T (Q_1^T Q_1) Q_2 = I \\ \det (Q_1 Q_2) &{}= (\det Q_1) (\det Q_2) = +1. \end{align}$ For n greater than 2, multiplication of n×n rotation matrices is not commutative. $\begin{align} Q_1 &{}= \begin{bmatrix}0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix} & Q_2 &{}= \begin{bmatrix}0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0\end{bmatrix} \\ Q_1 Q_2 &{}= \begin{bmatrix}0 & -1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0\end{bmatrix} & Q_2 Q_1 &{}= \begin{bmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}. \end{align}$ Noting that any identity matrix is a rotation matrix, and that matrix multiplication is associative, we may summarize all these properties by saying that the n×n rotation matrices form a group, which for n > 2 is non-abelian. Called a special orthogonal group, and denoted by SO(n), SO(n,R), SOn, or SOn(R), the group of n×n rotation matrices is isomorphic to the group of rotations in an n-dimensional space. This means that multiplication of rotation matrices corresponds to composition of rotations, applied in left-to-right order of their corresponding matrices. 
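The non-commutativity claim can be checked directly with the two matrices just given (not from the article):

```python
import numpy as np

Q1 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])    # 90 degrees about the z axis
Q2 = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])    # 90 degrees about the y axis
print(Q1 @ Q2)                                        # matches the Q1 Q2 shown above
print(Q2 @ Q1)                                        # a different matrix
print(np.array_equal(Q1 @ Q2, Q2 @ Q1))               # False: 3x3 rotations do not commute
```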
## Ambiguities Alias and alibi rotations The interpretation of a rotation matrix can be subject to many ambiguities. Alias or alibi (passive or active) transformation The coordinates of a point P may change due to either a rotation of the coordinate system CS (alias), or a rotation of the point P (alibi). In the latter case, the rotation of P also produces a rotation of the vector v representing P. In other words, either P and v are fixed while CS rotates (alias), or CS is fixed while P and v rotate (alibi). Any given rotation can be legitimately described both ways, as vectors and coordinate systems actually rotate with respect to each other, about the same axis but in opposite directions. Throughout this article, we chose the alibi approach to describe rotations. For instance, $R(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}$ represents a counterclocwise rotation of a vector v by an angle θ, or a rotation of CS by the same angle but in the opposite direction (i.e. clockwise). Alibi and alias transformations are also known as active and passive transformations, respectively. Pre-multiplication or post-multiplication The same point P can be represented either by a column vector v or a row vector w. Rotation matrices can either pre-multiply column vectors (Rv), or post-multiply row vectors (wR). However, Rv produces a rotation in the opposite direction with respect to wR. Throughout this article, we described rotations produced on column vectors by means of a pre-multiplication. To obtain exactly the same rotation (i.e. the same final coordinates of point P), the row vector must be post-multiplied by the transpose of R (wRT). Right- or left-handed coordinates The matrix and the vector can be represented with respect to a right-handed or left-handed coordinate system. Throughout the article, we assumed a right-handed orientation, unless otherwise specified. Vectors or forms The vector space has a dual space of linear forms, and the matrix can act on either vectors or forms. In most cases the effect of the ambiguity is equivalent to the effect of a transposition of the rotation matrix. ## Decompositions ### Independent planes Consider the 3×3 rotation matrix $Q = \begin{bmatrix} 0.36 & 0.48 & -0.8 \\ -0.8 & 0.60 & 0 \\ 0.48 & 0.64 & 0.60 \end{bmatrix} .$ If Q acts in a certain direction, v, purely as a scaling by a factor λ, then we have $Q \bold{v} = \lambda \bold{v}, \,\!$ so that $\bold{0} = (\lambda I - Q) \bold{v} . \,\!$ Thus λ is a root of the characteristic polynomial for Q, $\begin{align} 0 &{}= \det (\lambda I - Q) \\ &{}= \lambda^3 - \tfrac{39}{25} \lambda^2 + \tfrac{39}{25} \lambda - 1 \\ &{}= (\lambda-1) (\lambda^2 - \tfrac{14}{25} \lambda + 1). \end{align}$ Two features are noteworthy. First, one of the roots (or eigenvalues) is 1, which tells us that some direction is unaffected by the matrix. For rotations in three dimensions, this is the axis of the rotation (a concept that has no meaning in any other dimension). Second, the other two roots are a pair of complex conjugates, whose product is 1 (the constant term of the quadratic), and whose sum is 2 cos θ (the negated linear term). This factorization is of interest for 3×3 rotation matrices because the same thing occurs for all of them. (As special cases, for a null rotation the "complex conjugates" are both 1, and for a 180° rotation they are both −1.) Furthermore, a similar factorization holds for any n×n rotation matrix. 
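Numerically, the factorization described here shows up in the spectrum: one eigenvalue is 1 (the axis direction) and the other two form a conjugate pair e^{±iθ}, with the trace equal to 1 + 2 cos θ. A sketch, not from the article, using the 3×3 example above.

```python
import numpy as np

Q = np.array([[0.36, 0.48, -0.8], [-0.8, 0.60, 0.0], [0.48, 0.64, 0.60]])
vals, vecs = np.linalg.eig(Q)
print(vals)                                   # one eigenvalue 1 and a complex-conjugate pair
theta = np.arccos((np.trace(Q) - 1) / 2)      # trace = 1 + 2 cos(theta)
print(np.degrees(theta))                      # about 74 degrees, as in the Examples section
axis = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(axis / np.linalg.norm(axis))            # proportional to (-1/3, 2/3, 2/3), up to sign
```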
If the dimension, n, is odd, there will be a "dangling" eigenvalue of 1; and for any dimension the rest of the polynomial factors into quadratic terms like the one here (with the two special cases noted). We are guaranteed that the characteristic polynomial will have degree n and thus n eigenvalues. And since a rotation matrix commutes with its transpose, it is a normal matrix, so can be diagonalized. We conclude that every rotation matrix, when expressed in a suitable coordinate system, partitions into independent rotations of two-dimensional subspaces, at most n⁄2 of them. The sum of the entries on the main diagonal of a matrix is called the trace; it does not change if we reorient the coordinate system, and always equals the sum of the eigenvalues. This has the convenient implication for 2×2 and 3×3 rotation matrices that the trace reveals the angle of rotation, θ, in the two-dimensional (sub-)space. For a 2×2 matrix the trace is 2 cos(θ), and for a 3×3 matrix it is 1+2 cos(θ). In the three-dimensional case, the subspace consists of all vectors perpendicular to the rotation axis (the invariant direction, with eigenvalue 1). Thus we can extract from any 3×3 rotation matrix a rotation axis and an angle, and these completely determine the rotation. ### Sequential angles The constraints on a 2×2 rotation matrix imply that it must have the form $Q = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$ with a2+b2 = 1. Therefore we may set a = cos θ and b = sin θ, for some angle θ. To solve for θ it is not enough to look at a alone or b alone; we must consider both together to place the angle in the correct quadrant, using a two-argument arctangent function. Now consider the first column of a 3×3 rotation matrix, $\begin{bmatrix}a\\b\\c\end{bmatrix} .$ Although a2+b2 will probably not equal 1, but some value r2 < 1, we can use a slight variation of the previous computation to find a so-called Givens rotation that transforms the column to $\begin{bmatrix}r\\0\\c\end{bmatrix} ,$ zeroing b. This acts on the subspace spanned by the x and y axes. We can then repeat the process for the xz subspace to zero c. Acting on the full matrix, these two rotations produce the schematic form $Q_{xz}Q_{xy}Q = \begin{bmatrix}1&0&0\\0&\ast&\ast\\0&\ast&\ast\end{bmatrix} .$ Shifting attention to the second column, a Givens rotation of the yz subspace can now zero the z value. This brings the full matrix to the form $Q_{yz}Q_{xz}Q_{xy}Q = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix} ,$ which is an identity matrix. Thus we have decomposed Q as $Q = Q_{xy}^{-1}Q_{xz}^{-1}Q_{yz}^{-1} .$ An n×n rotation matrix will have (n−1)+(n−2)+⋯+2+1, or $\sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} \,\!$ entries below the diagonal to zero. We can zero them by extending the same idea of stepping through the columns with a series of rotations in a fixed sequence of planes. We conclude that the set of n×n rotation matrices, each of which has n2 entries, can be parameterized by n(n−1)/2 angles. | | | | | |------|------|------|------| | xzxw | xzyw | xyxw | xyzw | | yxyw | yxzw | yzyw | yzxw | | zyzw | zyxw | zxzw | zxyw | | xzxb | yzxb | xyxb | zyxb | | yxyb | zxyb | yzyb | xzyb | | zyzb | xyzb | zxzb | yxzb | In three dimensions this restates in matrix form an observation made by Euler, so mathematicians call the ordered sequence of three angles Euler angles. However, the situation is somewhat more complicated than we have so far indicated. 
Despite the small dimension, we actually have considerable freedom in the sequence of axis pairs we use; and we also have some freedom in the choice of angles. Thus we find many different conventions employed when three-dimensional rotations are parameterized for physics, or medicine, or chemistry, or other disciplines. When we include the option of world axes or body axes, 24 different sequences are possible. And while some disciplines call any sequence Euler angles, others give different names (Euler, Cardano, Tait-Bryan, Roll-pitch-yaw) to different sequences. One reason for the large number of options is that, as noted previously, rotations in three dimensions (and higher) do not commute. If we reverse a given sequence of rotations, we get a different outcome. This also implies that we cannot compose two rotations by adding their corresponding angles. Thus Euler angles are not vectors, despite a similarity in appearance as a triple of numbers. ### Nested dimensions A 3×3 rotation matrix like $Q_{3 \times 3} = \begin{bmatrix}\cos \theta & \sin \theta & {\color{CadetBlue}0} \\ -\sin \theta & \cos \theta & {\color{CadetBlue}0} \\ {\color{CadetBlue}0} & {\color{CadetBlue}0} & {\color{CadetBlue}1}\end{bmatrix}$ suggests a 2×2 rotation matrix, $Q_{2 \times 2} = \begin{bmatrix}\cos \theta & \sin \theta \\ -\sin \theta & \cos \theta\end{bmatrix} ,$ is embedded in the upper left corner: $Q_{3 \times 3} = \left[ \begin{matrix} Q_{2 \times 2} & \bold{0} \\ \bold{0}^T & 1 \end{matrix} \right] .$ This is no illusion; not just one, but many, copies of n-dimensional rotations are found within (n+1)-dimensional rotations, as subgroups. Each embedding leaves one direction fixed, which in the case of 3×3 matrices is the rotation axis. For example, we have $Q_{\bold{x}}(\theta) = \begin{bmatrix}1 & 0 & 0 \\ 0 & \cos \theta & \sin \theta \\ 0 & -\sin \theta & \cos \theta\end{bmatrix} ,$ $Q_{\bold{y}}(\theta) = \begin{bmatrix}\cos \theta & 0 & -\sin \theta \\ 0 & 1 & 0 \\ \sin \theta & 0 & \cos \theta\end{bmatrix} ,$ $Q_{\bold{z}}(\theta) = \begin{bmatrix}\cos \theta & \sin \theta & 0 \\ -\sin \theta & \cos \theta & 0 \\ 0 & 0 & 1\end{bmatrix} ,$ fixing the x axis, the y axis, and the z axis, respectively. The rotation axis need not be a coordinate axis; if u = (x,y,z) is a unit vector in the desired direction, then $\begin{align} Q_{\bold{u}}(\theta) &{}= \begin{bmatrix} 0&-z&y\\ z&0&-x\\ -y&x&0 \end{bmatrix} \sin \theta + (I - \bold{u}\bold{u}^T) \cos \theta + \bold{u}\bold{u}^T \\ &{}= \begin{bmatrix} (1-x^2) c_{\theta} + x^2 & - z s_{\theta} - x y c_{\theta} + x y & y s_{\theta} - x z c_{\theta} + x z \\ z s_{\theta} - x y c_{\theta} + x y & (1-y^2) c_{\theta} + y^2 & -x s_{\theta} - y z c_{\theta} + y z \\ -y s_{\theta} - x z c_{\theta} + x z & x s_{\theta} - y z c_{\theta} + y z & (1-z^2) c_{\theta} + z^2 \end{bmatrix} \\ &{}= \begin{bmatrix} x^2 (1-c_{\theta}) + c_{\theta} & x y (1-c_{\theta}) - z s_{\theta} & x z (1-c_{\theta}) + y s_{\theta} \\ x y (1-c_{\theta}) + z s_{\theta} & y^2 (1-c_{\theta}) + c_{\theta} & y z (1-c_{\theta}) - x s_{\theta} \\ x z (1-c_{\theta}) - y s_{\theta} & y z (1-c_{\theta}) + x s_{\theta} & z^2 (1-c_{\theta}) + c_{\theta} \end{bmatrix} , \end{align}$ where cθ = cos θ, sθ = sin θ, is a rotation by angle θ leaving axis u fixed. A direction in (n+1)-dimensional space will be a unit magnitude vector, which we may consider a point on a generalized sphere, Sn. Thus it is natural to describe the rotation group SO(n+1) as combining SO(n) and Sn. 
A suitable formalism is the fiber bundle, $SO(n) \hookrightarrow SO(n+1) \to S^n , \,\!$ where for every direction in the "base space", Sn, the "fiber" over it in the "total space", SO(n+1), is a copy of the "fiber space", SO(n), namely the rotations that keep that direction fixed. Thus we can build an n×n rotation matrix by starting with a 2×2 matrix, aiming its fixed axis on S2 (the ordinary sphere in three-dimensional space), aiming the resulting rotation on S3, and so on up through Sn−1. A point on Sn can be selected using n numbers, so we again have n(n−1)/2 numbers to describe any n×n rotation matrix. In fact, we can view the sequential angle decomposition, discussed previously, as reversing this process. The composition of n−1 Givens rotations brings the first column (and row) to (1,0,…,0), so that the remainder of the matrix is a rotation matrix of dimension one less, embedded so as to leave (1,0,…,0) fixed. ### Skew parameters via Cayley's formula Main article: Skew-symmetric matrix When an n×n rotation matrix, Q, does not include −1 as an eigenvalue, so that none of the planar rotations of which it is composed are 180° rotations, then Q+I is an invertible matrix. Most rotation matrices fit this description, and for them we can show that (Q−I)(Q+I)−1 is a skew-symmetric matrix, A. Thus AT = −A; and since the diagonal is necessarily zero, and since the upper triangle determines the lower one, A contains n(n−1)/2 independent numbers. Conveniently, I−A is invertible whenever A is skew-symmetric; thus we can recover the original matrix using the Cayley transform, $A \mapsto (I+A)(I-A)^{-1} , \,\!$ which maps any skew-symmetric matrix A to a rotation matrix. In fact, aside from the noted exceptions, we can produce any rotation matrix in this way. Although in practical applications we can hardly afford to ignore 180° rotations, the Cayley transform is still a potentially useful tool, giving a parameterization of most rotation matrices without trigonometric functions. In three dimensions, for example, we have (Cayley 1846) $\begin{align} &\begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix} \mapsto {} \\ &\quad \frac{1}{1+x^2+y^2+z^2} \begin{bmatrix} 1+x^2-y^2-z^2 & 2 x y-2 z & 2 y+2 x z \\ 2 x y+2 z & 1-x^2+y^2-z^2 & 2 y z-2 x \\ 2 x z-2 y & 2 x+2 y z & 1-x^2-y^2+z^2 \end{bmatrix} . \end{align}$ If we condense the skew entries into a vector, (x,y,z), then we produce a 90° rotation around the x axis for (1,0,0), around the y axis for (0,1,0), and around the z axis for (0,0,1). The 180° rotations are just out of reach; for, in the limit as x goes to infinity, (x,0,0) does approach a 180° rotation around the x axis, and similarly for other directions. ### Decomposition into shears For the 2D case, a rotation matrix can be decomposed into three shear matrices (Paeth 1986): $\begin{align} R(\theta) &{}= \begin{bmatrix} 1 & -\tan (\theta/2)\\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0\\ \sin \theta & 1 \end{bmatrix} \begin{bmatrix} 1 & -\tan (\theta/2)\\ 0 & 1 \end{bmatrix} \end{align}$ This is useful, for instance, in computer graphics, since shears can be implemented with fewer multiplication instructions than rotating a bitmap directly. On modern computers, this may not matter, but it can be relevant for very old or low-end microprocessors. ## Lie theory ### Lie group We have established that n×n rotation matrices form a group, the special orthogonal group, SO(n). 
This algebraic structure is coupled with a topological structure, in that the operations of multiplication and taking the inverse (which here is merely transposition) are continuous functions of the matrix entries. Thus SO(n) is a classic example of a topological group. (In purely topological terms, it is a compact manifold.) Furthermore, the operations are not only continuous, but smooth, so SO(n) is a differentiable manifold and a Lie group (Baker (2003); Fulton & Harris (1991)). Most properties of rotation matrices depend very little on the dimension, n; yet in Lie group theory we see systematic differences between even dimensions and odd dimensions. As well, there are some irregularities below n = 5; for example, SO(4) is, anomalously, not a simple Lie group, but instead isomorphic to the product of S3 and SO(3). ### Lie algebra Associated with every Lie group is a Lie algebra, a linear space equipped with a bilinear alternating product called a bracket. The algebra for SO(n) is denoted by $\mathfrak{so}(n) , \,\!$ and consists of all skew-symmetric n×n matrices (as implied by differentiating the orthogonality condition, I = QTQ). The bracket, [A1,A2], of two skew-symmetric matrices is defined to be A1A2−A2A1, which is again a skew-symmetric matrix. This Lie algebra bracket captures the essence of the Lie group product via infinitesimals. For 2×2 rotation matrices, the Lie algebra is a one-dimensional vector space, multiples of $J = \begin{bmatrix}0&-1\\1&0\end{bmatrix} .$ Here the bracket always vanishes, which tells us that, in two dimensions, rotations commute. Not so in any higher dimension. For 3×3 rotation matrices, we have a three-dimensional vector space with the convenient basis $A_{\bold{x}} = \begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix} , \quad A_{\bold{y}} = \begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix} , \quad A_{\bold{z}} = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&0\end{bmatrix} .$ The Lie brackets of these generators are as follows $[A_{\bold{x}}, A_{\bold{y}}] = A_{\bold{z}}, \quad [A_{\bold{z}}, A_{\bold{x}}] = A_{\bold{y}}, \quad [A_{\bold{y}}, A_{\bold{z}}] = A_{\bold{x}}.$ We can conveniently identify any matrix in this Lie algebra with a vector in R3, $\begin{align} \boldsymbol{\omega} &{}= (x,y,z) \\ \tilde{\boldsymbol{\omega}} &{}= x A_{\bold{x}} + y A_{\bold{y}} + z A_{\bold{z}} \\ &{}= \begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix} . \end{align}$ Under this identification, the so(3) bracket has a memorable description; it is the vector cross product, $[\tilde{\bold{u}},\tilde{\bold{v}}] = (\bold{u} \times \bold{v})^{\sim} . \,\!$ The matrix identified with a vector v is also memorable, because $\tilde{\bold{v}} \bold{u} = \bold{v} \times \bold{u} . \,\!$ Notice this implies that v is in the null space of the skew-symmetric matrix with which it is identified, because v×v is always the zero vector. ### Exponential map Connecting the Lie algebra to the Lie group is the exponential map, which we define using the familiar power series for ex (Wedderburn 1934, §8.02), $\begin{align} \exp \colon \mathfrak{so}(n) &{}\to SO(n) \\ A &{}\mapsto I + A + \tfrac{1}{2} A^2 + \tfrac{1}{6} A^3 + \cdots + \tfrac{1}{k!} A^k + \cdots \\ &{}= \sum_{k=0}^{\infty} \frac{1}{k!} A^k. \end{align}$ For any skew-symmetric A, exp(A) is always a rotation matrix. An important practical example is the 3×3 case, where we have seen we can identify every skew-symmetric matrix with a vector ω = uθ, where u = (x,y,z) is a unit magnitude vector. 
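The exponential map can be exercised numerically: exponentiating θ times the skew-symmetric matrix identified with a unit vector u gives the rotation about u by θ. A sketch using scipy.linalg.expm (my choice of tool, not the article's).

```python
import numpy as np
from scipy.linalg import expm

u = np.array([1.0, 2.0, 2.0]) / 3.0            # a unit axis
theta = 0.9
A = np.array([[0.0, -u[2], u[1]],
              [u[2], 0.0, -u[0]],
              [-u[1], u[0], 0.0]])              # the element of so(3) identified with u
R = expm(theta * A)                             # exp: so(3) -> SO(3)
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))  # R is a rotation
print(np.allclose(R @ u, u))                    # the axis u is left fixed
print(np.isclose(np.trace(R), 1 + 2 * np.cos(theta)))   # the rotation angle is theta
```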
Recall that u is in the null space of the matrix associated with ω, so that if we use a basis with u as the z axis the final column and row will be zero. Thus we know in advance that the exponential matrix must leave u fixed. It is mathematically impossible to supply a straightforward formula for such a basis as a function of u (its existence would violate the hairy ball theorem), but direct exponentiation is possible, and yields $\begin{align} \exp( \tilde{\boldsymbol{\omega}} ) &{}= \exp \left( \begin{bmatrix} 0 & -z \theta & y \theta \\ z \theta & 0&-x \theta \\ -y \theta & x \theta & 0 \end{bmatrix} \right) \\ &{}= \begin{bmatrix} 2 (x^2 - 1) s^2 + 1 & 2 x y s^2 - 2 z c s & 2 x z s^2 + 2 y c s \\ 2 x y s^2 + 2 z c s & 2 (y^2 - 1) s^2 + 1 & 2 y z s^2 - 2 x c s \\ 2 x z s^2 - 2 y c s & 2 y z s^2 + 2 x c s & 2 (z^2 - 1) s^2 + 1 \end{bmatrix} , \end{align}$ where c = cos θ⁄2, s = sin θ⁄2. We recognize this as our matrix for a rotation around axis u by angle θ. We also note that this mapping of skew-symmetric matrices is quite different from the Cayley transform discussed earlier. In any dimension, if we choose some nonzero A and consider all its scalar multiples, exponentiation yields rotation matrices along a geodesic of the group manifold, forming a one-parameter subgroup of the Lie group. More broadly, the exponential map provides a homeomorphism between a neighborhood of the origin in the Lie algebra and a neighborhood of the identity in the Lie group. In fact, we can produce any rotation matrix as the exponential of some skew-symmetric matrix, so for these groups the exponential map is a surjection. ### Baker–Campbell–Hausdorff formula Suppose we are given A and B in the Lie algebra. Their exponentials, exp(A) and exp(B), are rotation matrices, which we can multiply. Since the exponential map is a surjection, we know that for some C in the Lie algebra, exp(A)exp(B) = exp(C), and we write $A \ast B = C . \,\!$ When exp(A) and exp(B) commute (which always happens for 2×2 matrices, but not higher), then C = A+B, mimicking the behavior of complex exponentiation. The general case is given by the BCH formula, a series expanded in terms of the bracket (Hall 2004, Ch. 3; Varadarajan 1984, §2.15). For matrices, the bracket is the same operation as the commutator, which detects lack of commutativity in multiplication. The general formula begins as follows. $A \ast B = A + B + \tfrac12 [A,B] + \tfrac{1}{12} [A,[A,B]] - \tfrac{1}{12} [B,[A,B]] - \cdots \,\!$ Representation of a rotation matrix as a sequential angle decomposition, as in Euler angles, may tempt us to treat rotations as a vector space, but the higher order terms in the BCH formula reveal that to be a mistake. We again take special interest in the 3×3 case, where [A,B] equals the cross product, A×B. If A and B are linearly independent, then A, B, and A×B can be used as a basis; if not, then A and B commute. And conveniently, in this dimension the summation in the BCH formula has a closed form (Engø 2001) as αA+βB+γ(A×B). ### Spin group The Lie group of n×n rotation matrices, SO(n), is a compact and path-connected manifold, and thus locally compact and connected. However, it is not simply connected, so Lie theory tells us it is a kind of "shadow" (a homomorphic image) of a universal covering group. Often the covering group, which in this case is the spin group denoted by Spin(n), is simpler and more natural to work with (Baker 2003, Ch. 5; Fulton & Harris 1991, pp. 299–315). 
In the case of planar rotations, SO(2) is topologically a circle, S1. Its universal covering group, Spin(2), is isomorphic to the real line, R, under addition. In other words, whenever we use angles of arbitrary magnitude, which we often do, we are essentially taking advantage of the convenience of the "mother space". Every 2×2 rotation matrix is produced by a countable infinity of angles, separated by integer multiples of 2π. Correspondingly, the fundamental group of SO(2) is isomorphic to the integers, Z. In the case of spatial rotations, SO(3) is topologically equivalent to three-dimensional real projective space, RP3. Its universal covering group, Spin(3), is isomorphic to the 3-sphere, S3. Every 3×3 rotation matrix is produced by two opposite points on the sphere. Correspondingly, the fundamental group of SO(3) is isomorphic to the two-element group, Z2. We can also describe Spin(3) as isomorphic to quaternions of unit norm under multiplication, or to certain 4×4 real matrices, or to 2×2 complex special unitary matrices. Concretely, a unit quaternion, q, with $\begin{align} q &{}= w + \bold{i}x + \bold{j}y + \bold{k}z , \\ 1 &{}= w^2 + x^2 + y^2 + z^2 , \end{align}$ produces the rotation matrix $Q = \begin{bmatrix} 1 - 2 y^2 - 2 z^2 & 2 x y - 2 z w & 2 x z + 2 y w \\ 2 x y + 2 z w & 1 - 2 x^2 - 2 z^2 & 2 y z - 2 x w \\ 2 x z - 2 y w & 2 y z + 2 x w & 1 - 2 x^2 - 2 y^2 \end{bmatrix} .$ This is our third version of this matrix, here as a rotation around non-unit axis vector (x,y,z) by angle 2θ, where cos θ = w and |sin θ| = ||(x,y,z)||. (The proper sign for sin θ is implied once the signs of the axis components are decided.) Many features of this case are the same for higher dimensions. The coverings are all two-to-one, with SO(n), n > 2, having fundamental group Z2. The natural setting for these groups is within a Clifford algebra. And the action of the rotations is produced by a kind of "sandwich", denoted by qvq∗. ## Infinitesimal rotations The matrices in the Lie algebra are not themselves rotations; the skew-symmetric matrices are derivatives, proportional differences of rotations. An actual "differential rotation", or infinitesimal rotation matrix has the form $I + A \, d\theta , \,\!$ where dθ is vanishingly small. These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals (Goldstein, Poole & Safko 2002, §4.8). To understand what this means, consider $dA_{\bold{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix} .$ We first test the orthogonality condition, QTQ = I. The product is $dA_{\bold{x}}^T \, dA_{\bold{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1+d\theta^2 & 0 \\ 0 & 0 & 1+d\theta^2 \end{bmatrix} ,$ differing from an identity matrix by second order infinitesimals, which we discard. So to first order, an infinitesimal rotation matrix is an orthogonal matrix. Next we examine the square of the matrix. $dA_{\bold{x}}^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1-d\theta^2 & -2d\theta \\ 0 & 2d\theta & 1-d\theta^2 \end{bmatrix}.$ Again discarding second order effects, we see that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation, $dA_{\bold{y}} = \begin{bmatrix} 1 & 0 & d\phi \\ 0 & 1 & 0 \\ -d\phi & 0 & 1 \end{bmatrix} .$ Compare the products dAxdAy and dAydAx. 
$\begin{align} dA_{\bold{x}}\,dA_{\bold{y}} &{}= \begin{bmatrix} 1 & 0 & d\phi \\ d\theta\,d\phi & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{bmatrix} \\ dA_{\bold{y}}\,dA_{\bold{x}} &{}= \begin{bmatrix} 1 & d\theta\,d\phi & d\phi \\ 0 & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{bmatrix}. \\ \end{align}$

Since dθ dφ is second order, we discard it; thus, to first order, multiplication of infinitesimal rotation matrices is commutative. In fact,

$dA_{\bold{x}}\,dA_{\bold{y}} = dA_{\bold{y}}\,dA_{\bold{x}} , \,\!$

again to first order. Put in other words, the order in which infinitesimal rotations are applied is irrelevant; this useful fact makes, for example, the derivation of rigid body rotation relatively simple. But we must always be careful to distinguish (the first order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from derivatives of rotation matrices (namely skew-symmetric matrices). Contrast the behavior of finite rotation matrices in the BCH formula with that of infinitesimal rotation matrices, where all the commutator terms will be second order infinitesimals, so we do have a vector space.

## Conversions

We have seen the existence of several decompositions that apply in any dimension, namely independent planes, sequential angles, and nested dimensions. In all these cases we can either decompose a matrix or construct one. We have also given special attention to 3×3 rotation matrices, and these warrant further attention, in both directions (Stuelpnagel 1964).

### Quaternion

Main article: Quaternions and spatial rotation

Given the unit quaternion q = (w,x,y,z), the equivalent left-handed (post-multiplied) 3×3 rotation matrix is

$Q = \begin{bmatrix} 1 - 2 y^2 - 2 z^2 & 2 x y - 2 z w & 2 x z + 2 y w \\ 2 x y + 2 z w & 1 - 2 x^2 - 2 z^2 & 2 y z - 2 x w \\ 2 x z - 2 y w & 2 y z + 2 x w & 1 - 2 x^2 - 2 y^2 \end{bmatrix} .$

Now every quaternion component appears multiplied by two in a term of degree two, and if all such terms are zero what's left is an identity matrix. This leads to an efficient, robust conversion from any quaternion (whether unit, nonunit, or even zero) to a 3×3 rotation matrix.

```
Nq = w^2 + x^2 + y^2 + z^2
if Nq > 0.0 then s = 2/Nq else s = 0.0
X = x*s;  Y = y*s;  Z = z*s
wX = w*X; wY = w*Y; wZ = w*Z
xX = x*X; xY = x*Y; xZ = x*Z
yY = y*Y; yZ = y*Z; zZ = z*Z
[ 1.0-(yY+zZ)   xY-wZ         xZ+wY       ]
[ xY+wZ         1.0-(xX+zZ)   yZ-wX       ]
[ xZ-wY         yZ+wX         1.0-(xX+yY) ]
```

Freed from the demand for a unit quaternion, we find that nonzero quaternions act as homogeneous coordinates for 3×3 rotation matrices. The Cayley transform, discussed earlier, is obtained by scaling the quaternion so that its w component is 1. For a 180° rotation around any axis, w will be zero, which explains the Cayley limitation.

The sum of the entries along the main diagonal (the trace), plus one, equals $4 - 4(x^2+y^2+z^2)$, which is $4w^2$. Thus we can write the trace itself as $2w^2+2w^2-1$; and from the previous version of the matrix we see that the diagonal entries themselves have the same form: $2x^2+2w^2-1$, $2y^2+2w^2-1$, and $2z^2+2w^2-1$. So we can easily compare the magnitudes of all four quaternion components using the matrix diagonal. We can, in fact, obtain all four magnitudes using sums and square roots, and choose consistent signs using the skew-symmetric part of the off-diagonal entries.
```
t = Qxx+Qyy+Qzz    (the trace of Q)
r = sqrt(1+t)
w = 0.5*r
x = copysign(0.5*sqrt(1+Qxx-Qyy-Qzz), Qzy-Qyz)
y = copysign(0.5*sqrt(1-Qxx+Qyy-Qzz), Qxz-Qzx)
z = copysign(0.5*sqrt(1-Qxx-Qyy+Qzz), Qyx-Qxy)
```

where copysign(x,y) is x with the sign of y:

$\operatorname{copysign}(x,y) = \operatorname{sign}(y) \; |x|.$

Alternatively, use a single square root and division

```
t = Qxx+Qyy+Qzz
r = sqrt(1+t)
s = 0.5/r
w = 0.5*r
x = (Qzy-Qyz)*s
y = (Qxz-Qzx)*s
z = (Qyx-Qxy)*s
```

This is numerically stable so long as the trace, t, is not negative; otherwise, we risk dividing by (nearly) zero. In that case, suppose Qxx is the largest diagonal entry, so x will have the largest magnitude (the other cases are similar); then the following is safe.

```
t = Qxx+Qyy+Qzz
r = sqrt(1+Qxx-Qyy-Qzz)
s = 0.5/r
w = (Qzy-Qyz)*s
x = 0.5*r
y = (Qxy+Qyx)*s
z = (Qzx+Qxz)*s
```

If the matrix contains significant error, such as accumulated numerical error, we may construct a symmetric 4×4 matrix,

$K = \frac13 \begin{bmatrix} Q_{xx}-Q_{yy}-Q_{zz} & Q_{yx}+Q_{xy} & Q_{zx}+Q_{xz} & Q_{yz}-Q_{zy} \\ Q_{yx}+Q_{xy} & Q_{yy}-Q_{xx}-Q_{zz} & Q_{zy}+Q_{yz} & Q_{zx}-Q_{xz} \\ Q_{zx}+Q_{xz} & Q_{zy}+Q_{yz} & Q_{zz}-Q_{xx}-Q_{yy} & Q_{xy}-Q_{yx} \\ Q_{yz}-Q_{zy} & Q_{zx}-Q_{xz} & Q_{xy}-Q_{yx} & Q_{xx}+Q_{yy}+Q_{zz} \end{bmatrix} ,$

and find the eigenvector, (w,x,y,z), of its largest magnitude eigenvalue. (If Q is truly a rotation matrix, that value will be 1.) The quaternion so obtained will correspond to the rotation matrix closest to the given matrix (Bar-Itzhack 2000).

### Polar decomposition

If the n×n matrix M is non-singular, its columns are linearly independent vectors; thus the Gram–Schmidt process can adjust them to be an orthonormal basis. Stated in terms of numerical linear algebra, we convert M to an orthogonal matrix, Q, using QR decomposition. However, we often prefer a Q "closest" to M, which this method does not accomplish. For that, the tool we want is the polar decomposition (Fan & Hoffman 1955; Higham 1989).

To measure closeness, we may use any matrix norm invariant under orthogonal transformations. A convenient choice is the Frobenius norm, $\|Q-M\|_F$, squared, which is the sum of the squares of the element differences. Writing this in terms of the trace, Tr, our goal is,

• Find Q minimizing Tr( $(Q-M)^T(Q-M)$ ), subject to $Q^TQ = I$.

Though written in matrix terms, the objective function is just a quadratic polynomial. We can minimize it in the usual way, by finding where its derivative is zero. For a 3×3 matrix, the orthogonality constraint implies six scalar equalities that the entries of Q must satisfy. To incorporate the constraint(s), we may employ a standard technique, Lagrange multipliers, assembled as a symmetric matrix, Y. Thus our method is:

• Differentiate Tr( $(Q-M)^T(Q-M) + (Q^TQ-I)Y$ ) with respect to (the entries of) Q, and equate to zero.

Consider a 2×2 example. Including constraints, we seek to minimize

$\begin{align} &\scriptstyle{ (Q_{xx}-M_{xx})^2 + (Q_{xy}-M_{xy})^2 } \\ &\scriptstyle{ {} + (Q_{yx}-M_{yx})^2 + (Q_{yy}-M_{yy})^2 } \\ &\scriptstyle{ {} + (Q_{xx}^2+Q_{yx}^2-1)Y_{xx} + (Q_{xy}^2+Q_{yy}^2-1)Y_{yy} } \\ &\scriptstyle{ {} + 2(Q_{xx} Q_{xy} + Q_{yx} Q_{yy})Y_{xy} . } \end{align}$

Taking the derivative with respect to Qxx, Qxy, Qyx, Qyy in turn, we assemble a matrix.
$\scriptstyle{ 2 \begin{bmatrix} \scriptstyle{ Q_{xx}-M_{xx} + Q_{xx} Y_{xx} + Q_{xy} Y_{xy} } & \scriptstyle{ Q_{xy}-M_{xy} + Q_{xx} Y_{xy} + Q_{xy} Y_{yy} } \\ \scriptstyle{ Q_{yx}-M_{yx} + Q_{yx} Y_{xx} + Q_{yy} Y_{xy} } & \scriptstyle{ Q_{yy}-M_{yy} + Q_{yx} Y_{xy} + Q_{yy} Y_{yy} } \end{bmatrix}}$

In general, we obtain the equation

$0 = 2(Q-M) + 2QY , \,\!$

so that

$M = Q(I+Y) = QS , \,\!$

where Q is orthogonal and S is symmetric. To ensure a minimum, the Y matrix (and hence S) must be positive definite. Linear algebra calls QS the polar decomposition of M, with S the positive square root of $S^2 = M^TM$.

$S^2 = (Q^T M)^T (Q^T M) = M^T Q Q^T M = M^T M \,\!$

When M is non-singular, the Q and S factors of the polar decomposition are uniquely determined. However, the determinant of S is positive because S is positive definite, so Q inherits the sign of the determinant of M. That is, Q is only guaranteed to be orthogonal, not a rotation matrix. This is unavoidable; an M with negative determinant has no uniquely defined closest rotation matrix.

### Axis and angle

Main article: Axis-angle representation

To efficiently construct a rotation matrix Q from an angle θ and a unit axis u, we can take advantage of symmetry and skew-symmetry within the entries. If x, y, and z are the components of the unit vector representing the axis, and

$\begin{align} c &= \cos \theta\\ s &= \sin \theta\\ C &= 1-c\end{align}$

then

$Q(\theta) = \begin{bmatrix} xxC+c & xyC-zs & xzC+ys\\ yxC+zs & yyC+c & yzC-xs\\ zxC-ys & zyC+xs & zzC+c \end{bmatrix}$

Determining an axis and angle, like determining a quaternion, is only possible up to sign; that is, (u,θ) and (−u,−θ) correspond to the same rotation matrix, just like q and −q. As well, axis-angle extraction presents additional difficulties. The angle can be restricted to be from 0° to 180°, but angles are formally ambiguous by multiples of 360°. When the angle is zero, the axis is undefined. When the angle is 180°, the matrix becomes symmetric, which has implications in extracting the axis. Near multiples of 180°, care is needed to avoid numerical problems: in extracting the angle, a two-argument arctangent with atan2(sin θ, cos θ) equal to θ avoids the insensitivity of arccosine; and in computing the axis magnitude to force unit magnitude, a brute-force approach can lose accuracy through underflow (Moler & Morrison 1983).

A partial approach is as follows:

$\begin{align} x &= Q_{zy} - Q_{yz}\\ y &= Q_{xz} - Q_{zx}\\ z &= Q_{yx} - Q_{xy}\\ r &= \sqrt{x^2 + y^2 + z^2}\\ t &= Q_{xx} + Q_{yy} + Q_{zz}\\ \theta &= \mbox{atan2}(r,t-1)\end{align}$

The x, y, and z components of the axis would then be divided by r. A fully robust approach will use different code when t, the trace of the matrix Q, is negative, as with quaternion extraction. When r is zero because the angle is zero, an axis must be provided from some source other than the matrix.

### Euler angles

Complexity of conversion escalates with Euler angles (used here in the broad sense). The first difficulty is to establish which of the twenty-four variations of Cartesian axis order we will use. Suppose the three angles are θ1, θ2, θ3; physics and chemistry may interpret these as

$Q(\theta_1,\theta_2,\theta_3)= Q_{\bold{x}}(\theta_1) Q_{\bold{y}}(\theta_2) Q_{\bold{z}}(\theta_3) , \,\!$

while aircraft dynamics may use

$Q(\theta_1,\theta_2,\theta_3)= Q_{\bold{z}}(\theta_3) Q_{\bold{y}}(\theta_2) Q_{\bold{x}}(\theta_1) . \,\!$
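A minimal sketch of the first of these conventions, composing the three elementary rotations with numpy (the helper names rot_x, rot_y, rot_z, euler_xyz are purely illustrative):

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def euler_xyz(theta1, theta2, theta3):
    # Q(theta1, theta2, theta3) = Qx(theta1) Qy(theta2) Qz(theta3), the first convention above
    return rot_x(theta1) @ rot_y(theta2) @ rot_z(theta3)

Q = euler_xyz(0.3, -1.1, 0.5)
print(np.allclose(Q.T @ Q, np.eye(3)), np.isclose(np.linalg.det(Q), 1.0))  # True True
```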
One systematic approach begins with choosing the right-most axis. Among all permutations of (x,y,z), only two place that axis first; one is an even permutation and the other odd. Choosing parity thus establishes the middle axis. That leaves two choices for the left-most axis, either duplicating the first or not. These three choices give us 3×2×2 = 12 variations; we double that to 24 by choosing static or rotating axes.

This is enough to construct a matrix from angles, but triples differing in many ways can give the same rotation matrix. For example, suppose we use the zyz convention above; then we have the following equivalent pairs:

| | | | | | | | |
|-------|------|--------|----|---------|--------|-------|--------------------|
| (90°, | 45°, | −105°) | ≡ | (−270°, | −315°, | 255°) | multiples of 360° |
| (72°, | 0°, | 0°) | ≡ | (40°, | 0°, | 32°) | singular alignment |
| (45°, | 60°, | −30°) | ≡ | (−135°, | −60°, | 150°) | bistable flip |

Angles for any order can be found using a concise common routine (Herter & Lott 1993; Shoemake 1994).

The problem of singular alignment, the mathematical analog of physical gimbal lock, occurs when the middle rotation aligns the axes of the first and last rotations. It afflicts every axis order at either even or odd multiples of 90°. These singularities are not characteristic of the rotation matrix as such, and only occur with the usage of Euler angles. The singularities are avoided when considering and manipulating the rotation matrix as orthonormal row vectors (in 3D applications often named the 'right'-vector, 'up'-vector and 'out'-vector) instead of as angles. The singularities are also avoided when working with quaternions.

## Uniform random rotation matrices

We sometimes need to generate a uniformly distributed random rotation matrix. It seems intuitively clear in two dimensions that this means the rotation angle is uniformly distributed between 0 and 2π. That intuition is correct, but does not carry over to higher dimensions. For example, if we decompose 3×3 rotation matrices in axis-angle form, the angle should not be uniformly distributed; the probability that (the magnitude of) the angle is at most θ should be $\tfrac{1}{\pi}(\theta - \sin \theta)$, for 0 ≤ θ ≤ π.

Since SO(n) is a connected and locally compact Lie group, we have a simple standard criterion for uniformity, namely that the distribution be unchanged when composed with any arbitrary rotation (a Lie group "translation"). This definition corresponds to what is called Haar measure. León, Massé & Rivest (2006) show how to use the Cayley transform to generate and test matrices according to this criterion.

We can also generate a uniform distribution in any dimension using the subgroup algorithm of Diaconis & Shahshahani (1987). This recursively exploits the nested dimensions group structure of SO(n), as follows. Generate a uniform angle and construct a 2×2 rotation matrix. To step from n to n+1, generate a vector v uniformly distributed on the n-sphere, $S^n$, embed the n×n matrix in the next larger size with last column (0,…,0,1), and rotate the larger matrix so the last column becomes v.

As usual, we have special alternatives for the 3×3 case. Each of these methods begins with three independent random scalars uniformly distributed on the unit interval. Arvo (1992) takes advantage of the odd dimension to change a Householder reflection to a rotation by negation, and uses that to aim the axis of a uniform planar rotation. Another method uses unit quaternions.
Multiplication of rotation matrices is homomorphic to multiplication of quaternions, and multiplication by a unit quaternion rotates the unit sphere. Since the homomorphism is a local isometry, we immediately conclude that to produce a uniform distribution on SO(3) we may use a uniform distribution on $S^3$. Euler angles can also be used, though not with each angle uniformly distributed (Murnaghan 1962; Miles 1965). For the axis-angle form, the axis is uniformly distributed over the unit sphere of directions, $S^2$, while the angle has the non-uniform distribution over [0,π] noted previously (Miles 1965).
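A small sketch of this quaternion route, assuming numpy; here the uniform distribution on $S^3$ is realized by normalizing an isotropic 4-dimensional Gaussian sample (a standard alternative to the three-uniform-scalar constructions mentioned above), and the quaternion is converted with the formula given earlier:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation_matrix():
    # Haar-uniform rotation: draw a unit quaternion uniformly from S^3 by normalizing
    # an isotropic 4-D Gaussian sample, then convert it to a rotation matrix.
    q = rng.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

R = random_rotation_matrix()
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))  # True True
```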
## Notes

1.
2. Taylor, Camillo; Kriegman (1994). "Minimization on the Lie Group SO(3) and Related Manifolds". Technical Report. No. 9405 (Yale University).

## References

• Arvo, James (1992), "Fast random rotation matrices", in David Kirk, Graphics Gems III, San Diego: Academic Press Professional, pp. 117–120, ISBN 978-0-12-409671-4
• Baker, Andrew (2003), Matrix Groups: An Introduction to Lie Group Theory, Springer, ISBN 978-1-85233-470-3
• Bar-Itzhack, Itzhack Y. (Nov.–Dec. 2000), "New method for extracting the quaternion from a rotation matrix", AIAA Journal of Guidance, Control and Dynamics 23 (6): 1085–1087, doi:10.2514/2.4654, ISSN 0731-5090
• Björck, Åke; Bowie, Clazett (June 1971), "An iterative algorithm for computing the best estimate of an orthogonal matrix", 8 (2): 358–364, doi:10.1137/0708036, ISSN 0036-1429
• Cayley, Arthur (1846), "Sur quelques propriétés des déterminants gauches", 32: 119–123, ISSN 0075-4102; reprinted as article 52 in Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, I (1841–1853), Cambridge University Press, pp. 332–336
• Diaconis, Persi; Shahshahani, Mehrdad (1987), "The subgroup algorithm for generating uniform random variables", Probability in the Engineering and Informational Sciences 1: 15–32, doi:10.1017/S0269964800000255, ISSN 0269-9648
• Engø, Kenth (June 2001), "On the BCH-formula in so(3)", BIT Numerical Mathematics 41 (3): 629–632, doi:10.1023/A:1021979515229, ISSN 0006-3835
• Fan, Ky; Hoffman, Alan J. (February 1955), "Some metric inequalities in the space of matrices", Proceedings of the American Mathematical Society 6 (1): 111–116, doi:10.2307/2032662, ISSN 0002-9939, JSTOR 2032662
• Fulton, William; Harris, Joe (1991), Representation Theory: A First Course, GTM 129, New York, Berlin, Heidelberg: Springer, ISBN 978-0-387-97495-8, MR 1153249
• Goldstein, Herbert; Poole, Charles P.; Safko, John L. (2002), Classical Mechanics (third ed.), Addison Wesley, ISBN 978-0-201-65702-9
• Hall, Brian C. (2004), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer, ISBN 978-0-387-40122-5 (GTM 222)
• Herter, Thomas; Lott, Klaus (September–October 1993), "Algorithms for decomposing 3-D orthogonal matrices into primitive rotations", Computers & Graphics 17 (5): 517–527, doi:10.1016/0097-8493(93)90003-R, ISSN 0097-8493
• Higham, Nicholas J. (October 1, 1989), "Matrix nearness problems and applications", in Gover, Michael J. C.; Barnett, Stephen, Applications of Matrix Theory, Oxford University Press, pp. 1–27, ISBN 978-0-19-853625-3
• León, Carlos A.; Massé, Jean-Claude; Rivest, Louis-Paul (February 2006), "A statistical model for random rotations", Journal of Multivariate Analysis 97 (2): 412–430, doi:10.1016/j.jmva.2005.03.009, ISSN 0047-259X
• Miles, Roger E. (December 1965), "On random rotations in R3", Biometrika 52 (3/4): 636–639, doi:10.2307/2333716, ISSN 0006-3444, JSTOR 2333716
• Moler, Cleve; Morrison, Donald (1983), "Replacing square roots by pythagorean sums", IBM Journal of Research and Development 27 (6): 577–581, doi:10.1147/rd.276.0577, ISSN 0018-8646
• Murnaghan, Francis D. (1950), "The element of volume of the rotation group", 36 (11): 670–672, doi:10.1073/pnas.36.11.670, ISSN 0027-8424
• Murnaghan, Francis D. (1962), The Unitary and Rotation Groups, Lectures on applied mathematics, Washington: Spartan Books
• Paeth, Alan W. (1986), "A Fast Algorithm for General Raster Rotation", Proceedings, Graphics Interface '86: 77–81
• Pique, Michael E. (1990), "Rotation Tools", in Andrew S. Glassner, Graphics Gems, San Diego: Academic Press Professional, pp. 465–469, ISBN 978-0-12-286166-6
• Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 21.5.2. Picking a Random Rotation Matrix", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
• Shepperd, Stanley W. (May–June 1978), "Quaternion from rotation matrix", AIAA Journal of Guidance, Control and Dynamics 1 (3): 223–224, ISSN 0731-5090
• Shoemake, Ken (1994), "Euler angle conversion", in Paul Heckbert, Graphics Gems IV, San Diego: Academic Press Professional, pp. 222–229, ISBN 978-0-12-336155-4
• Stuelpnagel, John (October 1964), "On the parameterization of the three-dimensional rotation group", SIAM Review 6 (4): 422–430, doi:10.1137/1006093, ISSN 0036-1445 (Also NASA-CR-53568.)
• Varadarajan, Veeravalli S. (1984), Lie Groups, Lie Algebras, and Their Representation, Springer, ISBN 978-0-387-90969-1 (GTM 102)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 124, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8631858229637146, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/204277-implicit-differentiation.html
# Thread:

1. ## Implicit Differentiation

I've tried differentiating this over and over again, and it took me a while to get it, but I'm not sure; my answer was (1/3, 4/3). Here's the question:

At what point on the curve xy = (1-x-y)^2 is the tangent line parallel to the x-axis?

Thank you very much

2. ## Re: Implicit Differentiation

> Originally Posted by kspkido: I've tried differentiating this over and over again, and it took me a while to get it, but I'm not sure; my answer was (1/3, 4/3). Here's the question: At what point on the curve xy = (1-x-y)^2 is the tangent line parallel to the x-axis? Thank you very much

Hint: If the tangent line is parallel to the x-axis, then the gradient is 0. Therefore, you need to evaluate the point where dy/dx = 0.

3. ## Re: Implicit Differentiation

You found 1 point, but there is another.

4. ## Re: Implicit Differentiation

Wow, okay, thanks. But to tell you the truth, I didn't actually solve the problem; I just saw the answer at the back of the book. I know the process: differentiate, then set dy/dx to zero, since parallel means the same slope and the x-axis is horizontal, meaning the slope is zero. But what I ran into is that I don't really know how to get the rest of the points. I tried substituting for y in the dy/dx formula, but I can never separate y from x... so there are two unknowns in dy/dx = -(y-2-2x)/(2+2x+4y).

5. ## Re: Implicit Differentiation

Implicitly differentiating, you should find:

$\frac{dy}{dx}=\frac{2x+y-2}{2-x-2y}$

Equating this to zero implies:

$y=2(1-x)$

Substituting this into the original equation gives:

$-2x(x-1)=(x-1)^2$

$(x-1)(3x-1)=0$

$x=\frac{1}{3},1$

So the two points are:

$\left(\frac{1}{3},\frac{4}{3} \right),(1,0)$
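A small symbolic check of the result in the last reply (an illustrative sketch, assuming sympy is installed):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x*y - (1 - x - y)**2          # the curve is F(x, y) = 0

dydx = -sp.diff(F, x) / sp.diff(F, y)
print(sp.simplify(dydx))          # equivalent to (2*x + y - 2)/(2 - x - 2*y)

# Horizontal tangents: dF/dx = 0 together with the curve equation
points = sp.solve([sp.diff(F, x), F], [x, y], dict=True)
print(points)                     # expect x = 1/3, y = 4/3 and x = 1, y = 0
```

Setting the numerator of dy/dx to zero and intersecting with the curve reproduces the two points (1/3, 4/3) and (1, 0).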
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397146105766296, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/2455/what-happens-to-light-after-it-enters-an-eye/2459
# What happens to light after it enters an eye What happens to the light [energy] after it enters an eye and hits the rods and cones? I presume the energy becomes electrical, and it must be near 100% perfect, else our eyes would heat up? Or am I missing something? The motivation of this question is solar panel technology. - Nice question ! – user346 Jan 2 '11 at 18:58 1 Lot of that light reflects back to where it came from. Think about "red eye" effect when shooting people with lighting on. Also, eye is of course getting a little warmer (to the point that you can evaporate the retina with laser) but the effect should not be terribly huge to cause damage usually. As for the efficiency of conversion to electric signals, I am not sure. Let's whether someone else has something to say. – Marek Jan 2 '11 at 19:01 1 @Marek: Actually very little of that light is reflected, which is why your pupils appear dark. Red eye in photographs occurs because the camera flash, while brief, is very intense. Your eyes do not have time to react, so they are more dilated than they would be under continuous illumination of that intensity. In this case, enough bounces back for the camera to detect it, but it isn't typically very much. If it were then everybody would look red-eyed all the time. How creepy would that look! :) – Colin K Feb 4 '11 at 5:14 – Marek Feb 4 '11 at 9:11 ## 5 Answers There is some heating that takes place, but the amount is pretty trivial, because there just isn't that much light reaching the back of your eye. A back-of-the-envelope sort of estimate would be to say that the light of the Sun reaching the Earth's surface amounts to about a kilowatt of radiation per square meter. Your pupils have a radius of maybe a millimeter, probably much less in bright sunlight. So, if you're staring directly at the Sun (which, hopefully, you're not really doing), you're getting at most a few milliwatts delivered to the back of your eye. That's not going to tax the temperature regulation systems in your body, given that a living human generates about the same heat as a hundred-watt light bulb. If you dramatically increase the amount of light delivered to your eye, say by accidentally catching a high-power pulsed laser in the eye, you can overwhelm the body's ability to carry away the heat, and do real damage. The pulsed-laser lab next to the office where I did my undergrad thesis research had a sign on the door explaining in gruesome detail what would happen if you were to catch a full YAG laser pulse in the eye, which involved the boiling retina basically blasting your eyeball out of your skull. Which is why you wear safety glasses in those labs, and knock before entering any optics lab. - The head is quite well cooled, which is necessary: the brain clocks in at ~20W. A few extra mW from the eyes indeed don't matter. The laser danger is from in localized heating; the body as a whole could handle several W extra. (which is what your typical tabletop laser will produce) – MSalters Jan 4 '11 at 14:38 Other signs at laser labs say: "Refrain from looking into the laser beam with your 'remaining' eye" – Lagerbaer Feb 4 '11 at 4:59 Your reasoning is correct. Our eyes do heat up. The constant flow of blood is likely what carries away the heat. Elaborate explanation: all biological processes are ultimately thermodynamical processes. 
Any "operation" - such as a single cycle of ATP production by ATP synthase, or the conversion of incident photons into metabolic energy (photosynthesis) or into neuronal impulses (vision) - can be thought of as an engine with a certain efficiency $e$. This is traditionally given by the ratio of the amount of useful work $W$ the engine performs in a single cycle to the total amount of energy $E$ fed into it: $$e = \frac{W}{E}$$ and since $E = W + Q$, where $Q$ is the waste heat released into the environment, we have: $$e = 1 - \frac{Q}{E}$$. The second part of your question as I interpret it is: "What pressures of natural selection lead to the evolution of vision?". The simplest answer I can think of is the exposure of photo-reactive compounds to a constant bath of photons (from the Sun) over millions of years lead to the evolution of different mechanisms to exploit this energy, one of those mechanisms being that of vision. Of course, this is not really an answer. Maybe somebody with greater knowledge of evolutionary biology can shed more light on this aspect. - Well since this is the most complete answer, I'll try to explain the last question of Jonathan here. There are biological solar panels and they are called chlorophylls. They are responsible for photosynthesis(although chlorophyll is not the only substance capable of doing so, it is the most common). Now why do WE not have chlorophylls? It is simply because photosynthesis itself isn't that efficient in supplying a mobile and agile system like an animal's. Even plants do not use it directly to generate energy directly, they rather use it to produce glucose, which they oxidate later on. – Cem Jan 3 '11 at 1:03 @Cem I think the gist of the problem lies, in your words, in the fact that "photosynthesis itself isn't that efficient". Evolution is associated with the creation of mechanisms with each one being more efficient than its predecessors. The interesting question is how a system switches between states with different efficiencies during the course of evolution. – user346 Jan 3 '11 at 1:16 @space_cadet: Something being more efficient than its predecessor does not mean that it is efficient enough. This is called a fallacy actually. Furthermore, please read the whole sentence: isn't that efficient in supplying a mobile and agile system like an animal's. To generate enough energy purely with photosynthesis, you'd need to be travelling around with a few meter squares of green panels hovering above your head, which I assume would reduce your mobility. – Cem Jan 3 '11 at 20:03 @cem I thought I was agreeing with what you said. And I agree with your observation about the size of photosynthetic surfaces for "mobile plants". To clarify, the efficiency of a system is always measured w.r.t its operating environment. For instance, humans are efficient (in that they stay alive) in an oxygen rich environment, but inefficient (i.e. not alive) in an oxygen-poor background. So when we say that one mechanism is more efficient than another that aspect has to be kept in mind. – user346 Jan 3 '11 at 21:39 Yeah exactly. I think I misinterpreted your first comment and thought that you were implying something totally different. Your second comment clarified much for me. And yes, exactly as you say, efficiency is completely dependent on the situation. – Cem Jan 4 '11 at 9:54 You are correct, that the light becomes an electrical charge from our eyes, and the resulting signal is processed by a very complex system. 
The rest of it will heat up our eyes, but realize that the same thing is true of light hitting our body anywhere. The signal generated from our eyes is minimal, I wouldn't use the word solar panel in the slightest to describe the affect at all. - "The second part of your question as I interpret it is: "What pressures of natural selection lead to the evolution of vision?" algae grows at sunny spots. having a means to detect these sunny spots gives a greater chance of having more food and thus more offspring. - In humans (and vertebrates) the retinal structure evolved basically inverted in that the photons have to pass through layers of neurons and blood vessels before they hit the rods and cones. Wolves and some other vertebrates have a reflective membrane(the tapetum lucidum) behind the photoreceptors that reflect the photons back through the rods and cones and provide a double pass amplification (causing "eyeshine"). Humans lack this. The retina has ten layers which process signals from the photoreceptors, whose surface membranes contain retinol, which is isomerised by the photon energy, which affects ion channels, and may result in an action potential (necessarily leaving out all sorts of steps) which is preprocessed by retinal ganglion cells and information compressed and conducted along the optic nerve axons to the visual processing areas of the brain. Vision has been extensively studied and is immensely complicated. Much of the energy takes part in chemical reactions and in multiple action potentials (electrochemical) in the preprocessing in the 10 layer retina. Francis Crick (physicist, of DNA fame) and Christof Koch have written much about it--Koch's book, The Quest for Consciousness is fairly technical neuroscience focussing on the visual system. - ""and provide a double pass amplification "" Think about the meaning of "amplification"! – Georg Feb 4 '11 at 10:27
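For reference, a quick numeric version of the back-of-the-envelope estimate in the first answer above (the 1 kW/m² irradiance and 1 mm pupil radius are the round figures used there):

```python
import math

irradiance = 1000.0        # W / m^2, rough solar irradiance at the surface
pupil_radius = 1e-3        # m, a generous 1 mm pupil radius
power = irradiance * math.pi * pupil_radius**2
print(f"{power * 1e3:.1f} mW")   # about 3.1 mW entering the eye
```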
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.956858217716217, "perplexity_flag": "middle"}
http://regularize.wordpress.com/category/math/signal-and-image-processing/
# regularize

Trying to keep track of what I stumble upon

### Signal and image processing

Archived Posts from this Category

August 25, 2012

## ISMP over – non-convex and non-smooth minimization, l^1 and l^p

Posted by Dirk under Conference, Math, Optimization, Signal and image processing, Sparsity | Tags: Basis pursuit denoising, conference, ismp, non-convex optimization, sparsity |

ISMP is over now and I'm already home. I do not have many things to report on from the last day. This is not due to a lower quality of the talks but due to the fact that I was a little bit exhausted, as usual at the end of a five-day conference. However, I collect a few things for the record:

• In the morning I visited the semi-plenary by Xiaojun Chen on non-convex and non-smooth minimization with smoothing methods. Not surprisingly, she treated the problem

$\displaystyle \min_x f(x) + \|x\|_p^p$

with convex and smooth ${f:{\mathbb R}^n\rightarrow{\mathbb R}}$ and ${0<p<1}$. She proposed and analyzed smoothing methods, that is, to smooth the problem a bit to obtain a Lipschitz-continuous objective function ${\phi_\epsilon}$, minimizing this and then gradually decreasing ${\epsilon}$. This works, as she showed. If I remember correctly, she also treated "iteratively reweighted least squares" as I described in my previous post. Unfortunately, she did not include the generalized forward-backward methods based on ${\text{prox}}$-functions for non-convex functions. Kristian and I pursued this approach in our paper Minimization of non-smooth, non-convex functionals by iterative thresholding and some special features of our analysis include:

  • A condition which excludes some (but not all) local minimizers from being global.
  • An algorithm which avoids these non-global minimizers by carefully adjusting the steplength of the method.
  • A result that the number of local minimizers is still finite, even if the problem is posed in ${\ell^2({\mathbb N})}$ and not in ${{\mathbb R}^n}$.

Most of our results hold true, if the ${p}$-quasi-norm is replaced by functions of the form

$\displaystyle \sum_n \phi_n(|x_n|)$

with special non-convex ${\phi}$, namely fulfilling a list of assumptions like

  • ${\phi'(x) \rightarrow \infty}$ for ${x\rightarrow 0}$ (infinite slope at ${0}$) and ${\phi(x)\rightarrow\infty}$ for ${x\rightarrow\infty}$ (mild coercivity),
  • ${\phi'}$ strictly convex on ${]0,\infty[}$ and ${\phi'(x)/x\rightarrow 0}$ for ${x\rightarrow\infty}$,
  • for each ${b>0}$ there is ${a>0}$ such that for ${x<b}$ it holds that ${\phi(x)>ax^2}$, and
  • local integrability of some section of ${\partial\phi'(x) x}$.

As one easily sees, ${p}$-quasi-norms fulfill the assumptions and some other interesting functions as well (e.g. some with very steep slope at ${0}$ like ${x\mapsto \log(x^{1/3}+1)}$).

• Jorge Nocedal gave a talk on second-order methods for non-smooth problems and his main example was a functional like

$\displaystyle \min_x f(x) + \|x\|_1$

with a convex and smooth ${f}$, but different from Xiaojun Chen, he only considered the ${1}$-norm. His talk is among the best plenary talks I have ever attended and it was a great pleasure to listen to him. He carefully explained things and put them in perspective. In the cases where he skipped slides, he made me feel that I either did not miss an important thing, or understood them even though he didn't show them. He argued that it is not necessarily more expensive to use second order information in contrast to first order methods.
Indeed, the ${1}$-norm can be used to reduce the number of degrees of freedom for a second order step. What was pretty interesting is that he advocated semismooth Newton methods for this problem. Roland and I pursued this approach some time ago in our paper A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints and, if I remember correctly (my notes are not complete at this point), his family of methods included our ssn-method. The method Roland and I proposed worked amazingly well in the cases in which it converged, but the method suffered from non-global convergence. We had some preliminary ideas for globalization, which we could not tune enough to retain the speed of the method, and abandoned the topic. Now that the topic will most probably be revived by the community, I am looking forward to fresh ideas here.

August 23, 2012

## ISMP – inverse problems with uniform noise and TV does not preserve edges

Posted by Dirk under Conference, Math, Regularization, Signal and image processing | Tags: conference, ill-posed problems, image processing, ismp, parameter choice, regularization, tikhonov | [5] Comments

Today there are several things I could blog on. The first is the plenary by Rich Baraniuk on Compressed Sensing. However, I don't think that I could reflect the content in a way which would be helpful for a potential reader. Just for the record: If you have the chance to visit one of Rich's talks: Do it!

The second thing is the talk by Bernd Hofmann on source conditions, smoothness and variational inequalities and their use in regularization of inverse problems. However, this would be too technical for now and I just did not take enough notes to write a meaningful post.

As a third thing I have the talk by Christian Clason on inverse problems with uniformly distributed noise. He argued that for uniform noise it is much better to use an ${L^\infty}$ discrepancy term instead of the usual ${L^2}$-one. He presented a path-following semismooth Newton method to solve the problem

$\displaystyle \min_x \frac{1}{p}\|Kx-y^\delta\|_\infty^p + \frac{\alpha}{2}\|x\|_2^2$

and showed examples with different kinds of noise. Indeed the examples showed that ${L^\infty}$ works much better than ${L^2}$ here. But in fact it works even better if the noise is not uniformly distributed but "impulsive", i.e. it attains the bounds ${\pm\delta}$ almost everywhere. It seems to me that uniform noise would need a slightly different penalty but I don't know which one – probably you do?

Moreover, Christian presented the balancing principle to choose the regularization parameter (without knowledge about the noise level) and this was the first time I really got what it's about. What one does here is to choose ${\alpha}$ such that (for some ${\sigma>0}$ which only depends on ${K}$, but not on the noise)

$\displaystyle \sigma\|Kx_\alpha^\delta-y^\delta\|_\infty = \frac{\alpha}{2}\|x_\alpha^\delta\|_2^2.$

The rationale behind this is that the left hand side is monotonically non-decreasing in ${\alpha}$, while the right hand side is monotonically non-increasing. Hence, there should be some ${\alpha}$ "in the middle" which makes both somewhat equally large. Of course, we neither want to "over-regularize" (which would usually "smooth too much") nor to "under-regularize" (which would not eliminate noise). Hence, balancing seems to be a valid choice. From a practical point of view the balancing is also nice because one can use the fixed-point iteration

$\displaystyle \alpha^{n+1} = 2\sigma\frac{\|Kx_{\alpha^n}^\delta - y^\delta\|_\infty}{\|x_{\alpha^n}^\delta\|_2^2}$

which converges in a small number of iterations.
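To make the iteration concrete, a rough sketch in Python; note that the inner solver below is only a stand-in (a plain ${L^2}$ Tikhonov solve rather than the ${L^\infty}$ data term from the talk), and ${\sigma}$ and the data are invented for illustration:

```python
import numpy as np

def balancing_alpha(solve, K, y_delta, sigma, alpha0=1.0, iters=10):
    """Fixed-point iteration for the balancing principle described above.

    solve(alpha) is assumed to return a minimizer x_alpha of the regularized
    problem for the given parameter alpha.
    """
    alpha = alpha0
    for _ in range(iters):
        x = solve(alpha)
        residual = np.max(np.abs(K @ x - y_delta))       # L-infinity discrepancy
        alpha = 2.0 * sigma * residual / np.dot(x, x)    # alpha_{n+1} as in the formula above
    return alpha

# Toy usage with a stand-in inner solver (classical L2 Tikhonov, illustration only).
rng = np.random.default_rng(1)
K = rng.normal(size=(30, 20))
y_delta = K @ rng.normal(size=20) + 0.01 * rng.uniform(-1, 1, size=30)
solve = lambda a: np.linalg.solve(K.T @ K + a * np.eye(20), K.T @ y_delta)
print(balancing_alpha(solve, K, y_delta, sigma=1.0))
```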
Then there was the talk by Esther Klann, but unfortunately, I was late so I only heard the last half…

Last but not least we have the talk by Christiane Pöschl. If you are interested in Total-Variation-Denoising (TV denoising), then you probably have heard many times that "TV denoising preserves edges" (have a look at the Wikipedia page – it claims this twice). What Christiane showed (in a work with Vicent Caselles and M. Novaga) is that this claim is not true in general but only for very special cases. In the case of characteristic functions, the only functions for which the TV minimizer has sharp edges are the so-called calibrated sets, introduced by Caselles et al. Building on earlier works by Caselles and co-workers she calculated exact minimizers for TV denoising in the case that the image consists of characteristic functions of two convex sets or of a single star-shaped domain, that is, for a given set $B$ she calculated the solution of

$\displaystyle \min_u\int (u - \chi_B)^2dx + \lambda \int|Du|.$

This is not as easy as it may sound. Even for a single convex set one has to make some effort to compute the minimizer. She presented a nice connection of the shape of the obtained level-sets with the morphological operators of closing and opening. With the help of this link she derived a methodology to obtain the exact TV denoising minimizer for all parameters. I do not have the images right now, but be assured that most of the time the minimizers do not have sharp edges all over the place. Even for simple geometries (like two rectangles touching in a corner) strange things happen and only very few sharp edges appear. I'll keep you posted in case the paper comes out (or appears as a preprint).

Christiane has some nice images which make this much clearer: For two circles, edges are preserved if they are far enough away from each other. If they are close, the area "in between" them is filled in and, moreover, shows a fuzzy boundary. I remember myself seeing effects like this in the output of TV solvers and thinking "well, it seems that the algorithm is either not good or not converged yet – TV should output sharp edges!". For a star-shaped shape (well, actually a star) the corners are not only rounded but also blurred, and this is true both for the "outer" corners and the "inner" corners. So, if you have any TV-minimizing code, go ahead and check if your code actually does the right things on images like this! Moreover, I would love to see similar results for more complicated extensions of TV like Total Generalized Variation, which I treated here.

August 20, 2012

## ISMP first day

Posted by Dirk under Conference, Math, Signal and image processing, Sparsity | Tags: Basis pursuit denoising, conference, Inverse problems, ismp, regularization, sparsity |

The scientific program at ISMP started today and I planned to write a small personal summary of each day. However, it is a very intense meeting. Lots of excellent talks, lots of people to meet and little spare time. So I'm afraid that I have to deviate from my plan a little bit. Instead of a summary of every day I just pick out a few events. I remark that these picks do not reflect quality, significance or something like this in any way.
I just pick things for which I have something to record for personal reasons.

My day started after the first plenary with the session Testing environments for machine learning and compressed sensing in which my own talk was located. The session started with the talk by Michael Friedlander on the SPOT toolbox. Haven't heard of SPOT yet? Take a look! In a nutshell it's a toolbox which turns MATLAB into "OPLAB", i.e. it allows one to treat abstract linear operators like matrices. By the way, the code is on github.

The second talk was by Katya Scheinberg (who is giving a semi-plenary talk on derivative free optimization at the moment…). She talked about speeding up FISTA by cleverly adjusting step-sizes and over-relaxation parameters and generalizing these ideas to other methods like alternating direction methods. Notably, she used the "SPEAR test instances" from our project homepage! (And credited them as "surprisingly hard sparsity problems".)

My own talk was the third and last one in that session. I talked about the issue of constructing test instances for Basis Pursuit Denoising. I argued that the naive approach (which takes a matrix ${A}$, a right hand side ${b}$ and a parameter ${\lambda}$ and lets some great solver run for a while to obtain a solution ${x^*}$) may suffer from "trusted method bias". I proposed to use "reverse instance construction" which is: First choose ${A}$, ${\lambda}$ and the solution ${x^*}$ and then construct the right hand side ${b}$ (I blogged on this before here).

Last but not least, I'd like to mention the talk by Thomas Pock: He talked about parameter selection for variational models (think of the regularization parameter in Tikhonov regularization, for example). In a paper with Karl Kunisch titled A bilevel optimization approach for parameter learning in variational models they formulated this as a bi-level optimization problem. An approach which seemed to have been overdue! Although they treat somewhat simple inverse problems (well, denoising) (but with not so easy regularizers) it is a promising first step in this direction.

July 26, 2012

## Estimating unknown sparsity, sparse signal processing and proximal Newton-type methods

Posted by Dirk under Math, Optimization, Signal and image processing, Sparsity | Tags: abstract, arxiv, compressed sensing, papers, sparsity |

In this post I just collect a few papers that caught my attention in the last month. I begin with Estimating Unknown Sparsity in Compressed Sensing by Miles E. Lopes. The abstract reads:

Within the framework of compressed sensing, many theoretical guarantees for signal reconstruction require that the number of linear measurements ${n}$ exceed the sparsity ${\|x\|_0}$ of the unknown signal ${x\in\mathbb{R}^p}$. However, if the sparsity ${\|x\|_0}$ is unknown, the choice of ${n}$ remains problematic. This paper considers the problem of estimating the unknown degree of sparsity of ${x}$ with only a small number of linear measurements. Although we show that estimation of ${\|x\|_0}$ is generally intractable in this framework, we consider an alternative measure of sparsity ${s(x):=\frac{\|x\|_1^2}{\|x\|_2^2}}$, which is a sharp lower bound on ${\|x\|_0}$, and is more amenable to estimation. When ${x}$ is a non-negative vector, we propose a computationally efficient estimator ${\hat{s}(x)}$, and use non-asymptotic methods to bound the relative error of ${\hat{s}(x)}$ in terms of a finite number of measurements.
Remarkably, the quality of estimation is dimension-free, which ensures that ${\hat{s}(x)}$ is well-suited to the high-dimensional regime where ${n<<p}$. These results also extend naturally to the problem of using linear measurements to estimate the rank of a positive semi-definite matrix, or the sparsity of a non-negative matrix. Finally, we show that if no structural assumption (such as non-negativity) is made on the signal ${x}$, then the quantity ${s(x)}$ cannot generally be estimated when ${n<<p}$.

It's a nice combination of the observation that the quotient ${s(x)}$ is a sharp lower bound for ${\|x\|_0}$ and that it is possible to estimate the one-norm and the two-norm of a vector ${x}$ (with additional properties) from carefully chosen measurements. For a non-negative vector ${x}$ you just measure with the constant-one vector which (in a noisy environment) gives you an estimate of ${\|x\|_1}$. Similarly, measuring with a Gaussian random vector you can obtain an estimate of ${\|x\|_2}$.
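A tiny sketch of the sparsity measure itself (the function name is just illustrative, assuming numpy):

```python
import numpy as np

def numerical_sparsity(x):
    """The measure s(x) = ||x||_1^2 / ||x||_2^2 discussed above; always at most ||x||_0."""
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x, 1) ** 2 / np.linalg.norm(x) ** 2

x = np.zeros(1000)
x[:20] = np.random.default_rng(0).normal(size=20)
print(numerical_sparsity(x), np.count_nonzero(x))   # s(x) <= ||x||_0 = 20
```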
Then there is the dissertation of Dustin Mixon on the arxiv: Sparse Signal Processing with Frame Theory which is well worth reading but too long to provide a short overview. Here is the abstract:

Many emerging applications involve sparse signals, and their processing is a subject of active research. We desire a large class of sensing matrices which allow the user to discern important properties of the measured sparse signal. Of particular interest are matrices with the restricted isometry property (RIP). RIP matrices are known to enable efficient and stable reconstruction of sufficiently sparse signals, but the deterministic construction of such matrices has proven very difficult. In this thesis, we discuss this matrix design problem in the context of a growing field of study known as frame theory. In the first two chapters, we build large families of equiangular tight frames and full spark frames, and we discuss their relationship to RIP matrices as well as their utility in other aspects of sparse signal processing. In Chapter 3, we pave the road to deterministic RIP matrices, evaluating various techniques to demonstrate RIP, and making interesting connections with graph theory and number theory. We conclude in Chapter 4 with a coherence-based alternative to RIP, which provides near-optimal probabilistic guarantees for various aspects of sparse signal processing while at the same time admitting a whole host of deterministic constructions.

By the way, the thesis is dedicated "To all those who never dedicated a dissertation to themselves."

Further we have Proximal Newton-type Methods for Minimizing Convex Objective Functions in Composite Form by Jason D. Lee, Yuekai Sun, Michael A. Saunders. This paper extends the well explored first order methods for problems of the type ${\min g(x) + h(x)}$ with Lipschitz-differentiable ${g}$ or simple ${\mathrm{prox}_h}$ to second order Newton-type methods. The abstract reads:

We consider minimizing convex objective functions in composite form

$\displaystyle \min_{x\in\mathbb{R}^n} f(x) := g(x) + h(x)$

where ${g}$ is convex and twice-continuously differentiable and ${h:\mathbb{R}^n\rightarrow\mathbb{R}}$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. Many problems of relevance in high-dimensional statistics, machine learning, and signal processing can be formulated in composite form. We prove such methods are globally convergent to a minimizer and achieve quadratic rates of convergence in the vicinity of a unique minimizer. We also demonstrate the performance of such methods using problems of relevance in machine learning and high-dimensional statistics.

With this post I say goodbye for a few weeks of holiday.

April 2, 2012

## The elastic-net as augmentation and super-resolution by semi-continuous compressed sensing

Posted by Dirk under Math, Regularization, Signal and image processing, Sparsity | Tags: Basis pursuit, regularization, sparsity | [7] Comments

Today I would like to comment on two arxiv-preprints I stumbled upon:

1. "Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm" – The Elastic Net rediscovered

The paper "Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm" by Ming-Jun Lai and Wotao Yin is another contribution to a field which is (or was?) probably the fastest growing field in applied mathematics: Algorithms for convex problems with non-smooth ${\ell^1}$-like terms. The "mother problem" here is as follows: Consider a matrix ${A\in{\mathbb R}^{m\times n}}$ and ${b\in{\mathbb R}^m}$ and try to find a solution of

$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1\quad\text{s.t.}\quad Ax=b$

or, for ${\sigma>0}$

$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1\quad\text{s.t.}\quad \|Ax-b\|\leq\sigma$

which appeared here on this blog previously. Although this is a convex problem and even has a reformulation as a linear program, some instances of this problem are notoriously hard to solve and gained a lot of attention (because of their applicability in sparse recovery and compressed sensing). Very roughly speaking, a part of its hardness comes from the fact that the problem is neither smooth nor strictly convex. The contribution of Lai and Yin is that they analyze a slight perturbation of the problem which makes its solution much easier: They add another term in the objective; for ${\alpha>0}$ they consider

$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\quad\text{s.t.}\quad Ax=b$

or

$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\quad\text{s.t.}\quad \|Ax-b\|\leq\sigma.$

This perturbation does not make the problem smooth but renders it strongly convex (which usually makes the dual more smooth). It turns out that this perturbation makes life with this problem (and related ones) much easier – recovery guarantees still exist and algorithms behave better.

I think it is important to note that the "augmentation" of the ${\ell^1}$ objective with an additional squared ${\ell^2}$-term goes back to Zou and Hastie from the statistics community. There, the motivation was as follows: They observed that the pure ${\ell^1}$ objective tends to "overpromote" sparsity in the sense that if there are two columns in ${A}$ which are almost equally good in explaining some component of ${b}$ then only one of them is used. The "augmented problem", however, tends to use both of them. They coined the term "elastic net" for this method (for reasons which I never really got). I also worked on elastic-net problems for problems in the form

$\displaystyle \min_x \frac{1}{2}\|Ax-b\|^2 + \alpha\|x\|_1 + \frac{\beta}{2}\|x\|_2^2$

in this paper (doi-link). Here it also turns out that the problem gets much easier algorithmically. I found it very convenient to rewrite the elastic-net problem as

$\displaystyle \min_x \frac{1}{2}\|\begin{bmatrix}A\\ \sqrt{\beta} I\end{bmatrix}x-\begin{bmatrix}b\\ 0\end{bmatrix}\|^2 + \alpha\|x\|_1$

which turns the elastic-net problem into just another ${\ell^1}$-penalized problem with a special matrix and right hand side. Quite convenient for analysis and also, to some extent, algorithmically.
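To make the stacking trick concrete, a small sketch that feeds the augmented matrix to a plain ISTA (iterative soft-thresholding) loop; this is only an illustration of the reformulation, not the algorithm of Lai and Yin, and assumes numpy:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_via_stacking(A, b, alpha, beta, iters=500):
    """Minimize 0.5*||Ax-b||^2 + alpha*||x||_1 + (beta/2)*||x||_2^2 by rewriting it with
    the stacked matrix [A; sqrt(beta) I] and running ISTA on the resulting
    l1-penalized least-squares problem."""
    m, n = A.shape
    A_aug = np.vstack([A, np.sqrt(beta) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    step = 1.0 / np.linalg.norm(A_aug, 2) ** 2        # 1/L with L the largest squared singular value
    x = np.zeros(n)
    for _ in range(iters):
        x = soft_threshold(x - step * A_aug.T @ (A_aug @ x - b_aug), step * alpha)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
print(np.round(elastic_net_via_stacking(A, b, alpha=0.05, beta=0.1)[:8], 2))
```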
I found it very convenient to rewrite the elastic-net problem as $\displaystyle \min_x \frac{1}{2}\|\begin{bmatrix}A\\ \sqrt{\beta} I\end{bmatrix}x-\begin{bmatrix}b\\ 0\end{bmatrix}\|^2 + \alpha\|x\|_1$ which turns the elastic-net problem into just another ${\ell^1}$-penalized problem with a special matrix and right hand side. Quite convenient for analysis and also somehow algorithmically. 2. Towards a Mathematical Theory of Super-Resolution The second preprint is “Towards a Mathematical Theory of Super-Resolution” by Emmanuel Candes and Carlos Fernandez-Granda. The idea of super-resolution seems to pretty old and, very roughly speaking, is to extract a higher resolution of a measured quantity (e.g. an image) than the measured data allows. Of course, in this formulation this is impossible. But often one can gain something by additional knowledge of the image. Basically, this also is the idea behind compressed sensing and hence, it does not come as a surprise that the results in compressed sensing are used to try to explain when super-resolution is possible. The paper by Candes and Fernandez-Granada seems to be pretty close in spirit to Exact Reconstruction using Support Pursuit on which I blogged earlier. They model the sparse signal as a Radon measure, especially as a sum of Diracs. However, different from the support-pursuit-paper they use complex exponentials (in contrast to real polynomials). Their reconstruction method is basically the same as support pursuit: The try to solve $\displaystyle \min_{x\in\mathcal{M}} \|x\|\quad\text{s.t.}\quad Fx=y, \ \ \ \ \ (1)$ i.e. they minimize over the set of Radon measures ${\mathcal{M}}$ under the constraint that certain measurements ${Fx\in{\mathbb R}^n}$ result in certain given values ${y}$. Moreover, they make a thorough analysis of what is “reconstructable” by their ansatz and obtain a lower bound on the distance of two Diracs (in other words, a lower bound in the Prokhorov distance). I have to admit that I do not share one of their claims from the abstract: “We show that one can super-resolve these point sources with infinite precision—i.e. recover the exact locations and amplitudes—by solving a simple convex program.” My point is that I can not see to what extend the problem (1) is a simple one. Well, it is convex, but it does not seem to be simple. I want to add that the idea of “continuous sparse modelling” in the space of signed measures is very appealing to me and appeared first in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen. March 12, 2012 ## Some Sloan-Fellowships 2012 related to sparsity and signal processing Posted by Dirk under Math, Signal and image processing, Sparsity Recently the recipients of Sloan Fellowships for 2012 has been announced. This is a kind grant/price awarded to young scientists, usually people who are assistant professors on the tenure track, from the U.S. and Canada. While the actual award is not exorbitant (but still large) the Sloan Fellowships seem to be a good indicator for further success in the career and, more importantly, for awaited breakthroughs by the fellows. This year there are 20 mathematicians among the recipients and it appeared that I knew three of them: • Rachel Ward (UTexas at Austin), a student of Ingrid Daubechies, has written a very nice PhD Thesis “Freedom through imperfection” on signal processing based on redundancy. 
Among several interesting works she has a very recent preprint Stable image reconstruction using total variation minimization which I plan to cover in a future post (according to the abstract the paper provides rigorous theory for exact recovery with discrete total variation, which is cool).
• Ben Recht (University of Wisconsin, Madison) also works in the field of signal processing (among other fields) and is especially known for his work on exact matrix completion with compressed sensing techniques (together with Emmanuel Candes); I also liked his paper “The Convex Geometry of Linear Inverse Problems” (with Venkat Chandrasekaran, Pablo A. Parrilo, and Alan Willsky) which elaborated on the generalization of ${\ell^1}$-minimization and nuclear norm minimization.
• Greg Blekherman (Georgia Tech) works (among other things) on algebraic geometry and especially on the question of which non-negative polynomials (in more than one variable) can be written as sums of squares of polynomials, a question which is related to one of Hilbert’s 23 problems, namely Hilbert’s seventeenth problem. An interesting thing about non-negative polynomials and sums of squares is that: 1. Checking if a polynomial is non-negative is NP-hard. 2. Checking if a polynomial is a sum of squares can be done fast. 3. It seems that “most” non-negative polynomials are actually sums of squares but it is unclear “how many of them”; see Greg’s chapter of the forthcoming book “Semidefinite Optimization and Convex Algebraic Geometry” here.
Congratulations!

January 19, 2012

## Sparse recovery of multidimensional signal with Kronecker products

Posted by Dirk under Math, Signal and image processing, Sparsity | Tags: sparsity | [2] Comments

Today I write about sparse recovery of “multidimensional” signals. With “multidimensional” I mean something like this: A one-dimensional signal is a vector ${x\in{\mathbb R}^n}$ while a two-dimensional signal is a matrix ${x\in{\mathbb R}^{n_1\times n_2}}$. Similarly, a ${d}$-dimensional signal is ${x\in{\mathbb R}^{n_1\times\cdots\times n_d}}$. Of course, images, as two-dimensional signals, come to mind. Moreover, movies are three-dimensional, a hyperspectral 2D image (which has a whole spectrum attached to any pixel) is also three-dimensional, and time-dependent volume data is four-dimensional.

Multidimensional data is often a challenge due to the large amount of data. While the size of the signals is usually not the problem, it is more the size of the measurement matrices. In the context of compressed sensing or sparse recovery the signal is measured with a linear operator, i.e. one applies a number ${m}$ of linear functionals to the signal. In the ${d}$-dimensional case this can be encoded as a matrix ${A\in {\mathbb R}^{m\times \prod_1^d n_i}}$ and this is where the trouble with the data comes in: If you have a megapixel image (which is still quite small) the matrix has a million columns, and if you have a dense matrix, storage becomes an issue. One approach (which is indeed quite old) to tackle this problem is to consider special measurement matrices: If the signal has a sparse structure in every slice, i.e. every vector of the form ${x(i_1,\dots,i_{k-1},:,i_{k+1},\dots,i_d)}$ where we fix all but the ${k}$-th component, then the Kronecker product of measurement matrices for each dimension is the right thing.
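To make the storage point concrete, here is a small numerical sketch (my own notation, not from the post) of how such a Kronecker-structured measurement is applied to a two-dimensional signal without ever forming the big matrix; the identity it relies on is exactly the vec-formula quoted below.

n1 = 64; n2 = 64; m1 = 16; m2 = 16;
X  = randn(n1, n2);                 % the 2-D signal
A1 = randn(m1, n1);                 % measurement matrix for dimension 1
A2 = randn(m2, n2);                 % measurement matrix for dimension 2
Y_small = A1 * X * A2';             % cheap: only the small factors are ever stored
Y_big   = reshape(kron(A2, A1) * X(:), m1, m2);   % same numbers via the full Kronecker matrix
norm(Y_small(:) - Y_big(:))         % should be of the order of machine precision

For a megapixel image the full matrix would have about a million columns, while the two factors stay small — which is exactly the storage argument from the paragraph above.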
The Kronecker product of two matrices ${A\in{\mathbb R}^{m\times n}}$ and ${B\in{\mathbb R}^{k\times j}}$ is the ${mk\times nj}$ matrix $\displaystyle A\otimes B = \begin{bmatrix} a_{11}B & \dots & a_{1n}B\\ \vdots & & \vdots\\ a_{m1}B & \dots & a_{mn}B \end{bmatrix}.$ This has a lot to do with the tensor product and you should read the Wikipedia entry.

Moreover, it is numerically advantageous not to build the Kronecker product of dense matrices if you only want to apply it to a given signal. To see this, we introduce the vectorization operator ${\text{vec}:{\mathbb R}^{m\times n}\rightarrow{\mathbb R}^{nm}}$ which takes a matrix ${X}$ and stacks its columns into a tall column vector. For matrices ${A}$ and ${B}$ (of fitting sizes) it holds that $\displaystyle (B^T\otimes A)\text{vec}(X) = \text{vec}(AXB).$ So, multiplying ${X}$ from the left and from the right gives the application of the Kronecker product.

The use of Kronecker products in numerical linear algebra is fairly old (for example they are helpful for multidimensional finite difference schemes where you can build Kronecker products of sparse difference operators in the respective dimensions). Recently, they have been discovered for compressed sensing and sparse recovery in these two papers: Sparse solutions to underdetermined Kronecker product systems by Sadegh Jokar and Volker Mehrmann and the more recent Kronecker Compressed Sensing by Marco Duarte and Rich Baraniuk. From these papers you can extract some interestingly simple and nice theorems:

Theorem 1 For matrices ${A_1,\dots, A_d}$ with restricted isometry constants ${\delta_K(A_1),\dots,\delta_K(A_d)}$ of order ${K}$ it holds that the restricted isometry constant of the Kronecker product fulfills $\displaystyle \max_i \delta_K(A_i) \leq \delta_K(A_1\otimes\cdots\otimes A_d) \leq \prod_1^d (1+\delta_K(A_i))-1.$ Basically, the RIP constant of a Kronecker product is not better than the worst one but still not too large.

Theorem 2 For matrices ${A_1,\dots, A_d}$ with columns normalized to one, it holds that the spark of their Kronecker product fulfills $\displaystyle \text{spark}(A_1\otimes \dots\otimes A_d) = \min_i\text{spark}(A_i).$

Theorem 3 For matrices ${A_1,\dots, A_d}$ with columns normalized to one, it holds that the mutual coherence of their Kronecker product fulfills $\displaystyle \mu(A_1\otimes\dots\otimes A_d) = \max_i \mu(A_i).$

December 13, 2011

## Identification of time varying channels

Posted by Dirk under Math, Signal and image processing | Tags: signal processing, time varying channels |

In a recent post I wrote about time varying channels. These were operators, described by the spreading function ${s:{\mathbb R}\times{\mathbb R}\rightarrow{\mathbb C}}$, mapping an input signal ${x}$ to an output signal ${y}$ via $\displaystyle y(t) = H_s x(t) = \int_{{\mathbb R}}\int_{{\mathbb R}} s(\tau,\nu)x(t-\tau)\mathrm{e}^{\mathrm{i}\nu t}d\nu d\tau.$

1. The problem statement

In this post I write about the problem of identifying the channels from input-output measurements. More precisely, the question is as follows: Is it possible to choose a single input signal such that the corresponding output signal allows the computation of the spreading function? At first sight this seems hopeless: The degrees of freedom we have are a single function ${x:{\mathbb R}\rightarrow{\mathbb C}}$ and we look for a function ${s:{\mathbb R}\times{\mathbb R}\rightarrow{\mathbb C}}$.
However, additional assumptions on ${s}$ may help and indeed there is the following identification theorem due to Kailath: Theorem 1 (Kailath) The channel ${H_s}$ can be identified if the support of the spreading function ${s}$ is contained in a rectangle with area smaller than ${2\pi}$. Probably this looks a bit weird: The size of the support shall allow for identification? However, after turning the problem a bit around one sees that there is an intuitive explanation for this theorem which also explains the constant ${2\pi}$. 2. Translation to integral operators The first step is, to translate the problem into one with integral operators of the form $\displaystyle F_k x(t) = \int_{\mathbb R} k(t,\tau)x(\tau)d\tau.$ We observe that ${k}$ is linked to ${s}$ via $\displaystyle k(t,\tau) = \int s(t-\tau,\nu)e^{i\nu t}d\nu = (2\pi)^{1/2}\mathcal{F}_2^{-1}(s(t-\tau,\cdot))(t). \ \ \ \ \ (1)$ and hence, identifying ${s}$ is equivalent to identifying ${k}$ (by the way: I use the same normalization of the Fourier transform as in this post). From my point of view, the formulation of the identification problem is a bit clearer in the form of the integral operator ${F_k}$. It very much resembles “matrix-vector multiplication”: First you multiply ${k(t,\tau)}$ with you input signal ${x}$ in ${\tau}$ (forming “pointwise products of the rows and the input vector” as in matrix-vector multiplication) and then integrate with respect to ${t}$ (“summing up” the result). 3. Sending Diracs through the channel Now, let’s see, if we can identify ${k}$ from a single input-output measurement. We start, and send a single Dirac impulse through the channel: ${x = \delta_0}$. This formally gives $\displaystyle y(t) = \int_{\mathbb R} k(t,\tau)\delta_0(\tau)d\tau = k(t,0).$ Ok, this is not too much. From the knowledge of ${y}$ we can directly infer the values ${k(t,0)}$ but that’s it – nothing more. But we could also send a Dirac at a different position ${\tau_0}$, ${x = \delta_{\tau_0}}$ and obtain $\displaystyle y(t) = \int_{\mathbb R} k(t,\tau)\delta_{\tau_0}(\tau)d\tau = k(t,\tau_0).$ Ok, different Diracs give us information about ${k}$ for all ${t}$ but only for a single ${\tau_0}$. Let’s send a whole spike train (or Dirac comb) of “width” ${c>0}$: ${x = \Delta_c = \sum_{n\in{\mathbb Z}} \delta_{cn}}$. We obtain $\displaystyle y(t) = \int_{\mathbb R} k(t,\tau)\sum_{n\in{\mathbb Z}}\delta_{nc}(\tau)d\tau = \sum_{n\in{\mathbb Z}}k(t,cn).$ Oh – this does not reveal a single value of ${k}$ but “aggregated” values… 4. Choosing the right spike train Now let’s translate the one condition on ${s}$ we have to a condition on ${k}$: The support of ${s}$ is contained in a rectangle with area less then ${2\pi}$. Let’s assume that this rectangle in centered around zero (which is no loss of generality) and denote it by ${[-a/2,a/2]\times [-b/2,b/2]}$, i.e. ${s(\tau,\nu) = 0}$ if ${|\tau|>a/2}$ or ${|\nu|>b/2}$ (and of course ${ab<2\pi}$). From (1) we conclude that $\displaystyle k(t,\tau) = 0\ \text{ for }\ |t-\tau|>a/2.$ In the ${(t,\tau)}$-plane, the support looks like this: If we now send this spike train in the channel, we can visualize the output as follows: We observe that in this case we can infer the values of ${k(t,nc)}$ at some points ${t}$ but at other points we have an overlap and only observe a sum of ${k(t,nc)}$ and ${k(t,(n+1)c)}$. But if we make ${c}$ large enough, namely larger than ${a}$, we identify the values of ${k(t,nc)}$ exactly in the output signal! 5. Is that enough? 
Ok, now we know something about ${k}$ but how can we infer the rest of the values? Don't forget that we know that ${ab < 2\pi}$ and that we know that ${s(\tau,\nu)}$ is supported in ${[-a/2,a/2]\times[-b/2,b/2]}$. Up to now we only used the value of ${a}$. How does the support limitation in ${\nu}$ translate to ${k}$? We look again at (1) and see: The function ${\tilde k_t:\tau\mapsto k(t,t-\tau)}$ is bandlimited with bandwidth ${b/2}$ for every ${t}$. And what do we know about these functions ${\tilde k_t}$? We know the samples ${k(t,nc) = \tilde k_t(t-nc)}$. In other words, we have samples of ${\tilde k_t}$ with the sampling rate ${c}$. But there is the famous Nyquist-Shannon sampling theorem:

Theorem 2 (Nyquist-Shannon) A function ${f:{\mathbb R}\rightarrow{\mathbb C}}$ which is bandlimited with bandwidth ${B}$ is totally determined by its discrete time samples ${f(n\pi/B)}$, ${n\in{\mathbb Z}}$, and it holds that $\displaystyle f(t) = \sum_{n\in{\mathbb Z}} f(n\pi/B)\,\textup{sinc}\Big(\tfrac{B}{\pi}(t - \tfrac{n\pi}{B})\Big).$

We are happy with the first assertion: The samples totally determine the function if they are dense enough. In our case the bandwidth of ${\tilde k_t}$ is ${b/2}$ and hence, we need the sampling rate ${2\pi/b}$. What we have are samples with rate ${c}$. We collect our conditions on ${c}$: We need ${c}$

• larger than ${a}$ to separate the values of ${k(t,nc)}$ in the output.
• smaller than ${2\pi/b}$ to determine the full functions ${\tilde k_t(\tau) = k(t,t-\tau)}$.

Both together say $\displaystyle a < c < \frac{2\pi}{b}.$ Such a ${c}$ exists if ${ab < 2\pi}$, and we have proven Theorem 1.

6. Concluding remarks

The proof of Kailath's theorem reveals the role of the constraint ${ab <2\pi}$: We want ${a}$ not to be too large to be able to somehow separate the values of ${k(t,nc)}$ in the output measurement, and we need ${b}$ not too large to ensure that we can interpolate the values ${k(t,nc)}$ exactly. However, a severe drawback of this result is that one needs to know ${a}$ to construct the "sensing signal", which was the spike train ${\Delta_c}$. Moreover, this sensing signal itself has an infinite bandwidth, which is practically not desirable. It seems to be an open question whether there are more practical sensing signals.

There is more known about sensing of time varying channels: The support of ${s}$ does not have to be in a rectangle, it is enough if the measure of the support is smaller than ${2\pi}$ (a result usually attributed to Bello). Moreover, there are converse results which say that linear and stable identification is only possible under this restriction on the support size (see the work of Götz Pfander and coauthors).

November 28, 2011

## Time varying channels, pseudo differential operators and the like

Posted by Dirk under Math, Signal and image processing | Tags: signal processing, time varying channels | 1 Comment

I stumbled upon the notion of "time varying channels" in signal processing and after reading a bit I realized that these are really interesting objects and appear in many different parts of mathematics. In this post I collect a few of their realizations and relations.
In signal processing a “time varying channel” is a mapping which maps a signal ${x:{\mathbb R}^d\rightarrow {\mathbb C}}$ to an output ${y:{\mathbb R}^d\rightarrow{\mathbb C}}$ via $\displaystyle y(t) = \int_{{\mathbb R}^d}\int_{{\mathbb R}^d} s(\tau,\nu)x(t-\tau)\mathrm{e}^{\mathrm{i}\nu t}d\nu d\tau.$ Before we explain the name “time varying channel” we fix the notation for the Fourier transform I am going to use in this post:

Definition 1 For ${f:{\mathbb R}^d\rightarrow{\mathbb C}}$ we define the Fourier transform by $\displaystyle \mathcal{F}(f)(\omega) = \hat f(\omega) = (2\pi)^{-d/2}\int f(t)\mathrm{e}^{-\mathrm{i}\omega t}dt$ and denote the inverse Fourier transform by ${\mathcal{F}^{-1}f}$ or ${\check f}$. For a function ${f:{\mathbb R}^d\times{\mathbb R}^d\rightarrow{\mathbb C}}$ we denote with ${\mathcal{F}_1}$ and ${\mathcal{F}_2}$ the Fourier transform with respect to the first and second ${{\mathbb R}^d}$-component, respectively.

Remark 1 In all that follows we do formal calculations with integrals, not caring about integrability. All calculations are justified in the case of Schwartz functions and often hold in a much broader context of tempered distributions (this for example happens if the integrals represent Fourier transforms of functions).

The name “time varying channel” can be explained as follows: Consider a pure frequency as input: ${x(t) = \mathrm{e}^{-\mathrm{i}\omega t}}$. A usual linear channel gives as output a damped signal and the damping depends on the frequency ${\omega}$: ${y(t) = h(\omega) \mathrm{e}^{-\mathrm{i}\omega t}}$. If we send the pure frequency into our time varying channel we get $\displaystyle \begin{array}{rcl} y(t) & = &\int\int s(\tau,\nu) \mathrm{e}^{-\mathrm{i}\omega(t-\tau)}e^{\mathrm{i}\nu t}d\nu d\tau\\ & =& \int\int s(\tau,\nu) \mathrm{e}^{\mathrm{i}(\omega\tau +\nu t)}d\nu d\tau\, \mathrm{e}^{-\mathrm{i}\omega t}\\ & =& (2\pi)^d \hat s(-\omega,-t) \mathrm{e}^{-\mathrm{i}\omega t}. \end{array}$ Hence, the time varying channel also damps the pure frequencies but with a time dependent factor.

Let's start quite far away from signal processing:

1. Pseudo-differential operators

A general linear differential operator of order ${N}$ on functions on ${{\mathbb R}^d}$ is defined with multiindex notation as $\displaystyle Af(t) = \sum_{|\alpha|\leq N} \sigma_\alpha(t) D^\alpha f(t)$ with coefficient functions ${\sigma_\alpha}$. Using Fourier inversion we get $\displaystyle D^\alpha f(t) = (2\pi)^{-d/2} \int \hat f(\omega) (\mathrm{i} \omega)^\alpha\mathrm{e}^{\mathrm{i}\omega t}d\omega$ and hence $\displaystyle \begin{array}{rcl} Af(t) & =& \int \Big((2\pi)^{-d/2} \sum_{|\alpha|\leq N} \sigma_\alpha(t)(\mathrm{i} \omega)^\alpha\Big) \hat f(\omega)\mathrm{e}^{\mathrm{i}\omega t}d\omega\\ &=& \int \sigma(\omega,t) \hat f(\omega)\mathrm{e}^{\mathrm{i}\omega t}d\omega \\ & = & K_\sigma f(t). \end{array}$ For a general function (usually obeying some restrictions) ${\sigma:{\mathbb R}^d\times{\mathbb R}^d\rightarrow{\mathbb C}}$ we call the corresponding ${K_\sigma}$ a pseudo-differential operator with symbol ${\sigma}$.
2. As integral operators

By integral operator I mean something like $\displaystyle F_k f(t) = \int k(t,\tau)f(\tau)d\tau.$ We plug the definition of the Fourier transform into ${K_\sigma}$ and obtain $\displaystyle \begin{array}{rcl} K_\sigma f(t) &=& \int \sigma(\omega,t) (2\pi)^{-d/2} \int f(\tau)\mathrm{e}^{-\mathrm{i} \omega \tau}d\tau \mathrm{e}^{\mathrm{i} \omega t}d\omega\\ & = & \int \underbrace{(2\pi)^{-d/2} \int \sigma(\omega,t)\mathrm{e}^{-\mathrm{i}\omega(\tau-t)}d\omega}_{=k(t,\tau)} f(\tau)d\tau\\ &=& \int k(t,\tau)f(\tau)d\tau. \end{array}$ Using the Fourier transform we can express the relation between ${\sigma}$ and ${k}$ as $\displaystyle k(t,\tau) = (2\pi)^{-d/2}\int \sigma(\omega,t)\mathrm{e}^{-\mathrm{i}\omega(\tau-t)}d\omega = \mathcal{F}_1(\sigma(\cdot,t))(\tau-t). \ \ \ \ \ (1)$

3. As “time varying convolution”

The convolution of two functions ${f}$ and ${g}$ is defined as $\displaystyle (f*g)(t) = \int f(\tau) g(t-\tau)d\tau$ and we write “the convolution with ${g}$” as an operator ${C_g f = f * g}$. Defining $\displaystyle h_t(\tau) = (2\pi)^{-d/2}\int \sigma(\omega,t)\mathrm{e}^{\mathrm{i}\omega \tau}d\omega$ we deduce from (1) $\displaystyle K_\sigma f(t) = \int f(\tau) h_t(t-\tau)d\tau= (f * h_t)(t) = C_{h_t}f(t).$

4. As superposition of time-frequency shifts

Using that iterated Fourier transforms with respect to the components give the Fourier transform, i.e. ${\mathcal{F}_2\mathcal{F}_1 = \mathcal{F}}$, we obtain $\displaystyle \mathcal{F}_1\sigma = \mathcal{F}_2^{-1}\mathcal{F}_2\mathcal{F}_1\sigma = \mathcal{F}_2^{-1}\hat\sigma.$ From (1) we get $\displaystyle \begin{array}{rcl} k(t,\tau) &=& \mathcal{F}_2^{-1}\hat\sigma(\tau-t,t)\\ &=& (2\pi)^{-d/2} \int\hat\sigma(\tau-t,\nu)\mathrm{e}^{\mathrm{i} t\nu}d\nu. \end{array}$ Before plugging this into ${K_\sigma}$ we define time shifts ${T_u}$ and frequency shifts (or modulations) ${M_\nu}$ as $\displaystyle T_u f(t) = f(t-u),\quad M_\nu f(t) = \mathrm{e}^{\mathrm{i} t\nu}f(t).$ With this we get $\displaystyle \begin{array}{rcl} K_\sigma f(t) &=& (2\pi)^{-d/2}\int\int \hat\sigma(\tau-t,\nu)\mathrm{e}^{\mathrm{i} t\nu}f(\tau)d\nu d\tau\\ &=& (2\pi)^{-d/2}\int\int \hat\sigma(u,\nu)\mathrm{e}^{\mathrm{i} t\nu}f(t+u)d\nu du\\ &=& (2\pi)^{-d/2}\int\int\hat\sigma(u,\nu) M_\nu T_{-u} f(t) d\nu du\\ &=& \int\int w(u,\nu) M_\nu T_{-u} f(t)d\nu du = S_wf(t). \end{array}$ Hence, ${K_\sigma}$ is also a weighted superposition of ${M_\nu T_{-u} f}$ (time-frequency shifts) with weight ${w(u,\nu) = (2\pi)^{-d/2} \hat\sigma(u,\nu)}$.

5. Back to time varying channels

Simple substitution brings us back to the situation of a time varying channel $\displaystyle \begin{array}{rcl} K_\sigma f(t) &=& (2\pi)^{-d/2} \int\int \hat\sigma(u,\nu)\mathrm{e}^{\mathrm{i} t\nu}f(t+u)d\nu du\\ &=& \int \int s(\tau,\nu) f(t-\tau) \mathrm{e}^{\mathrm{i} t\nu} d\nu d\tau\\ &=& H_s f(t) \end{array}$ with ${s(\tau,\nu) = (2\pi)^{-d/2} \hat\sigma(-\tau,\nu)}$.

6. As superposition of product-convolution operators

Finally, I'd like to illustrate that this kind of operator can be seen as a superposition of product-convolution operators. Introducing product operators ${P_g f(t) = g(t)f(t)}$ we define a product-convolution operator as a convolution with a function ${h}$ followed by a multiplication with another function ${g}$: ${P_g C_h f}$.
To express ${K_\sigma}$ with product-convolution operators we choose an orthonormal basis which consists of tensor-products of functions ${(\phi_n\otimes\psi_k)}$ and develop ${\sigma}$ into this basis as $\displaystyle \sigma(\omega,t) = \sum_{n,k} a_{n,k} \phi_n(\omega)\psi_k(t).$ Then $\displaystyle \begin{array}{rcl} K_\sigma f(t) &=& \sum_{n,k} a_{n,k}\psi_k(t) \int \phi_n(\omega) \hat f(\omega) \mathrm{e}^{\mathrm{i} \omega t}d\omega\\ & = & \sum_{n,k} a_{n,k}\psi_k(t) (2\pi)^{d/2}\mathcal{F}^{-1}(\phi_n\, \hat f)(t)\\ &=& \sum_{n,k} a_{n,k}\psi_k(t) (\check \phi_n * f)(t) \end{array}$ and we obtain $\displaystyle K_\sigma f(t) = \sum_{n,k} a_{n,k}P_{\psi_k} C_{\check\phi_n} f (t).$ Remark 2 The integral operators of the form ${F_k}$ are indeed general objects as can be seen from the Schwartz kernel theorem. Every reasonable operator mapping Schwartz functions linearly onto the tempered distributions is itself a generalized integral operator. I tried to capture the various relation between the first five representations time varying channels in a diagram (where I went out of motivation before filling in all fields…): ### Like this: November 24, 2011 ## Minimum l^1-norm solutions are not always sparse Posted by Dirk under Math, Signal and image processing, Sparsity | Tags: Basis pursuit, sparsity | [15] Comments If you want to have a sparse solution to a linear system of equation and have heard of compressed sensing or sparse reconstruction than you probably know what to do: Get one of the many solvers for Basis Pursuit and be happy. Basis Pursuit was designed as a convex approximation of the generally intractable problem of finding the sparsest solution (that is, the solution with the smallest number of non-zero entries). By abuse of notation, we define for ${x\in\mathbb{R}^n}$ $\displaystyle \|x\|_0 = \#\{i\ : x_i\neq 0\}.$ (Because of ${\|x\|_0 = \lim_{p\rightarrow 0}\|x\|_p^p}$ some people prefer the, probably more correct but also more confusing, notation ${\|x\|_0^0}$…). Then, the sparsest solution of ${Ax=b}$ is the solution of $\displaystyle \min_x \|x\|_0,\quad \text{s.t.}\ Ax=b$ and Basis Pursuit replaces ${\|x\|_0}$ with “the closest convex proxy”, i.e. $\displaystyle \min_x \|x\|_1,\quad\text{s.t.}\ Ax=b.$ The good thing about Basis Pursuit suit is, that is really gives the sparsest solution under appropriate conditions as is widely known nowadays. Here I’d like to present two simple examples in which the Basis Pursuit solution is • not even close to the sparsest solution (by norm). • not sparse at all. 1. A small bad matrix We can build a bad matrix for Basis Pursuit, even in the case ${2\times 3}$: For a small ${\epsilon>0}$ define $\displaystyle A = \begin{bmatrix} \epsilon & 1 & 0\\ \epsilon & 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 1\\1 \end{bmatrix}.$ Of course, the sparsest solution is $\displaystyle x_0 = \begin{bmatrix} 1/\epsilon\\ 0\\ 0\end{bmatrix}$ while the solution of Basis Pursuit is $\displaystyle x_1 = \begin{bmatrix} 0\\1\\1 \end{bmatrix}.$ The summarize: For ${\epsilon<1/2}$ $\displaystyle \|x_0\|_0 = 1 < 2 = \|x_1\|_0,\quad \|x_0\|_1 = 1/\epsilon > 2 = \|x_1\|_1.$ (There is also a least squares solution that has three non-zero entries and a one-norm slightly larger than 2.) Granted, this matrix is stupid. Especially, its first column has a very small norm compared to the others. Ok, let’s construct a matrix with normalized columns. 2. A small bad matrix with normalized columns Fix an integer ${n}$ and a small ${\epsilon>0}$. 
We define a ${n\times(n+2)}$-matrix $\displaystyle \begin{bmatrix} 1+\epsilon/2 & -1+\epsilon/2 & 1 & 0 & \dots & \dots &0\\ -1+\epsilon/2 & 1+\epsilon/2 & 0 & 1 & \ddots & & 0\\ \epsilon/2 & \epsilon/2 & \vdots & \ddots& \ddots & \ddots& \vdots\\ \vdots & \vdots & \vdots & & \ddots & \ddots& 0\\ \epsilon/2 & \epsilon/2 & 0 & \dots& \dots& 0 & 1 \end{bmatrix}.$ Ok, the first two columns do not have norm 1 yet, so we normalize them by multiplying with the right constant $\displaystyle c = \frac{1}{\sqrt{2 + \tfrac{n\epsilon^2}{4}}}$ (which is close to ${1/\sqrt{2}}$) to get $\displaystyle A = \begin{bmatrix} c(1+\epsilon/2) & c(-1+\epsilon/2) & 1 & 0 & \dots & \dots &0\\ c(-1+\epsilon/2) & c(1+\epsilon/2) & 0 & 1 & \ddots & & 0\\ c\epsilon/2 & c\epsilon/2 & \vdots & \ddots& \ddots & \ddots& \vdots\\ \vdots & \vdots & \vdots & & \ddots & \ddots& 0\\ c\epsilon/2 & c\epsilon/2 & 0 & \dots& \dots& 0 & 1 \end{bmatrix}.$ Now we take the right hand side $\displaystyle b = \begin{bmatrix} 1\\\vdots\\1 \end{bmatrix}$ and see what solutions to ${Ax=b}$ are there. First, there is the least squares solution ${x_{\text{ls}} = A^\dagger b}$. This has only non-zero entries, the last ${n}$ entries are slightly smaller than ${1}$ and the first two are between ${0}$ and ${1}$, hence, ${\|x_{\text{ls}}\|_1 \approx n}$ (in fact, slightly larger). Second, there is a very sparse solution $\displaystyle x_0 = \frac{1}{\epsilon c} \begin{bmatrix} 1\\ 1\\ 0\\ \vdots\\ 0 \end{bmatrix}.$ This has two non-zero entries and a pretty large one-norm: ${\|x_0\|_1 = 2/(\epsilon c)}$. Third there is a solution with small one-norm: $\displaystyle x_1 = \begin{bmatrix} 0\\ 0\\ 1\\ \vdots\\ 1 \end{bmatrix}.$ We have ${n}$ non-zero entries and ${\|x_1\|_1 = n}$. You can check that this ${x_1}$ is also the unique Basis Pursuit solution (e.g. by observing that ${A^T[1,\dots,1]^T}$ is an element of ${\partial\|x_1\|_1}$ and that the first two entries in ${A^T[1,\dots,1]^T}$ are strictly smaller than 1 and positive – put differently, the vector ${[1,\dots,1]^T}$ is dual certificate for ${x_1}$). To summarize, for ${\epsilon < \sqrt{\frac{8}{n^2-n}}}$ it holds that $\displaystyle \|x_0\|_0 = 2 < n = \|x_1\|_0,\quad \|x_0\|_1 = 2/(c\epsilon) > n = \|x_1\|_1.$ The geometric idea behind this matrix is as follows: We take ${n}$ simple normalized columns (the identity part in ${A}$) which sum up to the right hand side ${b}$. Then we take two normalized vectors which are almost orthogonal to ${b}$ but have ${b}$ in their span (but one needs huge factors here to obtain ${b}$). Well, this matrix looks very artificial and indeed it’s constructed for one special purpose: To show that minimal ${\ell^1}$-norm solution are not always sparse (even when a sparse solution exists). It’s some kind of a hobby for me to construct instances for sparse reconstruction with extreme properties and I am thinking about a kind of “gallery” of these instances (probably extending the “gallery” command in Matlab). By the way: if you want to play around with this matrix, here is the code n = 100; epsilon = sqrt(8/(n^2-n))+0.1; c = 1/sqrt(2+n*epsilon^2/4); A = zeros(n,n+2); A(1:2,1:2) = ([1 -1;-1,1]+epsilon/2)*c; A(3:n,1:2) = epsilon/2*c; A(1:n,3:n+2) = eye(n); b = ones(n,1);
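As a hedged addition to the snippet (not part of the original post): with these variables one can verify numerically that both the sparse vector and the minimum one-norm vector satisfy the constraint, and compare their norms:

x0 = zeros(n+2,1); x0(1:2) = 1/(epsilon*c);   % the 2-sparse solution
x1 = [0; 0; ones(n,1)];                        % the minimum one-norm solution
norm(A*x0 - b)                                 % ~ 0
norm(A*x1 - b)                                 % ~ 0
[norm(x0,1), norm(x1,1)]                       % 2/(epsilon*c) versus n
[nnz(x0), nnz(x1)]                             % 2 versus n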
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 340, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372806549072266, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/322085/differentiation-of-logarithmic-functions
# Differentiation of logarithmic functions

I'm going over my homework from calc, and am having some trouble with a few questions. It seems as if I'm just not understanding how to solve the problem: 1. $$f(x) = \sqrt[5]{\ln x}$$ 2. $$f(x) = \sqrt{x}\ln x$$ 3. $$f(x)= \ln \dfrac{({2x+1})^3}{({3x-1})^4}$$ For the first one, I'm not even sure what to do. As for the second question I ended up with: $$f'(x) = \frac{1}{2x} \ln x + \frac{1}{x}\sqrt{x}$$ And for the last question I ended up with: $${(2x+1)^3\over(3x-1)^4} 3 \ln 2 - 4 \ln 3$$ (Sorry for my poor formatting, I wasn't able to use \frac and put the numerator and denominator to the power of x.) - I don't know what I was reading, but I've updated it to the correct version. There was no power of four, and it was the fifth root not the third. – user1327636 Mar 6 at 3:02 You need to go over the chain rule, and why did you put those weird-looking exponents to those $\,1'$s in (3)? – DonAntonio Mar 6 at 3:04 The post is updated, it was meant to be the entire num./den. to the power as it shows now. – user1327636 Mar 6 at 3:07

## 2 Answers

Hint: ($1$) Use the chain rule and remember $\sqrt[5]{y}=y^{1/5}$. ($2$) Product rule and use the fact that $\sqrt{y}=y^{1/2}$. For ($3$), use the chain rule and quotient rule. As an alternative to the quotient rule, you can use a property of logarithms that says $$\log\left(\frac{x^a}{y^b}\right)=\log(x^a)-\log(y^b)=a\log(x)-b\log(y).$$ - Nice hints (+1 earlier) – amWhy Mar 6 at 3:32 @amWhy: Thanks :) – Clayton Mar 6 at 3:34

For the third one, first recall from prerequisite courses that $$f(x)= \ln \dfrac{({2x+1})^3}{({3x-1})^4} = 3\ln(2x+1) - 4\ln(3x-1).$$ Then differentiate, and remember to apply the chain rule as needed. - My text book says that the answer to that question is: $$\frac{6}{2x+1} - \frac{12}{3x-1}$$ EDIT: I realize how they got there, thank you. – user1327636 Mar 6 at 3:17 Nice, Michael +1. – amWhy Mar 6 at 3:31
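For what it's worth (this is an addition, not part of the answers above), here is how the hint for ($1$) plays out with the chain rule, together with the analogous product-rule computation for ($2$):
$$f(x)=(\ln x)^{1/5}\ \Longrightarrow\ f'(x)=\frac{1}{5}(\ln x)^{-4/5}\cdot\frac{1}{x}=\frac{1}{5x\,(\ln x)^{4/5}},\qquad \frac{d}{dx}\left(\sqrt{x}\,\ln x\right)=\frac{\ln x}{2\sqrt{x}}+\frac{\sqrt{x}}{x}.$$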
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9577184915542603, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/265801/probability-distribution-of-number-of-times-required-to-roll-all-of-a-certain-se?answertab=votes
# Probability distribution of number of times required to roll all of a certain set of k numbers? Consider a 12-sided fair die. What is the distribution of the number T of rolls required to roll a 1, a 2, a 3, and a 4? Taking inspiration from the Coupon Collector's Problem, I believe that the expected number of rolls to achieve the goal would be $$E[T] = \sum\limits_{i=0}^3 \frac{12}{4-i} = 25$$ Similarly, the variance would be $$Var[T] = \sum\limits_{i=0}^3 \frac{1-\frac{4-i}{12}}{(\frac{4-i}{12})^2} = 180$$ But applying Chebyshev here does not yield very useful bounds. My question is therefore, how would you compute, for example, $P(T=16)$ or $P(T<30)$? Ideally this could be generalized to a set of k required numbers, not just 4 as in the example. - ## 2 Answers Personally I would call $P(n,b,k,j)$ the probability that after rolling $n$ dice with $b$ sides, $j$ of the target $k$ sides had been found, with the formula $$P(n+1,b,k,j)= \frac{b-k+j}{b} P(n,b,k,j) + \frac{k+1-j}{b} P(n,b,k,j-1)$$ starting from $P(0,b,k,j)=0$ for $j \not = 0$ and $P(0,b,k,0)=1$. Then $$\Pr(T \le t) = P(t,b,k,k)$$ and $$\Pr(T = t) = P(t,b,k,k)-P(t-1,b,k,k).$$ So in your example it is not difficult to calculate $\Pr(T = 16) \approx 0.0380722$ and $\Pr(T \le 30) \approx 0.7305294$. Your expected value of $T$ and variance appear to be correct. - Thanks for the response. A recursive definition is a good idea. I feel though that in your response, in the recurrence relation, one of the P's needs to have a j-1. – vote539 Dec 27 '12 at 13:31 @vote539: your final point is correct and I will edit – Henry Dec 27 '12 at 13:48 Thanks for the clarification! The results yielded by your solution match those yielded by a simulation of the dice rolls. A note to anyone who wants to implement this: the recursion stack grows very quickly, so it is necessary to use dynamic programming; that is, cache the results of the P function and only perform the computation when there is no cached value. – vote539 Dec 27 '12 at 19:46 Or you could use a spreadsheet – Henry Dec 27 '12 at 23:26 I think it's the sum of four different geometric distributions. For example, you start off and you haven't rolled any of the numbers. The probability of rolling one of them is 4/12 so you start a geometric distribution with that parameter. Then you suddenly roll one of them. Now you need to roll one of the other three, each roll has a probability of 3/12 of getting one of them, so it's another geometric distribution. By this logic I think T ~ Geo(4/12) + Geo(3/12) + Geo(2/12) + Geo(1/12) which of course is probably some hideous ditribution define in terms of convolutions (or you know, could miraculously be nice and of closed-form, I don't know what.) - This is sort-of what I was thinking, but what are the next steps to get to the point when you can actually compute the probability densities? – vote539 Dec 27 '12 at 10:01
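A small dynamic-programming sketch of the recursion from the first answer (the variable names are mine); it should reproduce the two probabilities quoted there, $\Pr(T=16)\approx 0.038$ and $\Pr(T\le 30)\approx 0.73$:

b = 12; k = 4; tmax = 30;
P = zeros(tmax+1, k+1);          % row n+1, column j+1 holds P(n,b,k,j)
P(1,1) = 1;                      % P(0,b,k,0) = 1
for n = 0:tmax-1
  for j = 0:k
    stay = (b-k+j)/b * P(n+1, j+1);
    move = 0;
    if j >= 1
      move = (k-j+1)/b * P(n+1, j);
    end
    P(n+2, j+1) = stay + move;
  end
end
cdf = P(:, k+1);                 % cdf(n+1) = Pr(T <= n)
prob_T_equals_16 = cdf(17) - cdf(16)
prob_T_at_most_30 = cdf(31)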
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447128772735596, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/305559/critiques-on-proof-showing-sqrt12-is-irrational?answertab=active
# Critiques on proof showing $\sqrt{12}$ is irrational. My only exposure to proofs was in a math logic class I took in University. I was wondering if my attempt at proving that $\sqrt{12}$ is irrational is OK. $$\Big(\frac{m}{n}\Big)^2 = 12$$ $$\Big(\frac{m}{2n}\Big)^2 = 3$$ $$m^2=3*(2n)^2$$ This implies $m$ is even and so $n$ must be odd. The problem can be reduced to: $$\Big(\frac{p}{n}\Big)^2 = 3$$ Because $n$ is odd, $p^2$ is odd, so $p$ is odd. This implies: $$4a+1 = 3(4b+1)$$ $$4a - 12b = 2$$ $$2a - 6b = 1$$ I'm kind of stuck at this point. I know that this can't be true but I don't know how to state it. Any critiques or suggestions? Thanks! - 7 "... and so $n$ must be odd." doesn't follow. However, you presumably meant to start the proof with "assume $\sqrt{12}$ is rational, and write it as $m/n$ is lowest terms". If so, then it does follow that $n$ must be odd (because otherwise $m$ and $n$ would have a common factor). What you say after this point seems to come out of thin air. Where did $p$ come from? How can the problem be reduced? Then where do $a$ and $b$ and those equations come from? – Hurkyl Feb 16 at 15:28 I see your point. What I had in mind was m = 2p. For the last equation, if p and n are both odd then there is an integer k such that 4*k+1 = p^2 same can be said for n. – user21154 Feb 16 at 16:24 ## 10 Answers You make it to complicated in my opinion. At first we show that a rational number $\neq 0$ times an irrational is irrational. Prove by contradiction: Let $x\in \mathbb{R}\setminus\mathbb{Q}$, $a,c\in\mathbb{Z}\setminus\{0\}$, $b,d\in \mathbb{N}$. $$x\cdot \frac{a}{b}=\frac{c}{d} \iff x=\frac{bc}{ad},$$ so $x$ would be rational. Use $$\sqrt{12}=\sqrt{4\cdot 3} = \sqrt{4} \cdot \sqrt{3} = 2 \cdot \sqrt{3}$$ So $\sqrt{12}$ irrational $\iff \sqrt{3}$ is irrational. Now we show $3|p^2 \implies 3|p$: We know 3 is prime so with the lemma of euklid we have $$3|p^2 \implies 3|p \wedge 3|p \implies 3|p$$ And for $\sqrt{3}$ irrational you make a contradiction: $$\sqrt{3}=\frac{p}{q}\iff 3q^2=p^2 \implies \exists k \in \mathbb{N}: 3k =p$$ $$q^2=3k^2$$ So $q$ and $p$ are both having the divisor $3$. Now there are two different ways how to use this information, the first one is saying that $p$ and $q$ doesn't have a common divisor, but as you can show they always have the divisor $3$ you can't write $\sqrt{3}$ as a fraction of numbers without common divisors. This is the more elegant way in my opinion. The other way is showing that both $p$ and $q$ can't be finite, because by repeating this arguement we see $3^n|p$ for all $n \in \mathbb{N}$ and the same for $q$. But because of $p,q\in \mathbb{N}$ and $3^n> 1+2n$ and the archimedic principle we get a contradiction. - 1 OK great. Thanks. – user21154 Feb 16 at 16:04 1 You might also want to prove that a rational number times an irrational one is irrational. – PyRulez Feb 16 at 16:36 3 Two points: one needs to justify the claim $\, 3\mid p^2,q^2 \Rightarrow 3 \mid p,q\,$ and one needs to make some initial assumption in order for $\rm\,3\mid p,q\,$ to terminate the proof (e.g. assume that $\,p/q\,$ is in lowest terms, hence $\,p,q\,$ are coprime) – Math Gems Feb 16 at 19:23 2 @Dominic There are many ways to deduce a contradiction from $\,3\mid p,q\,$ but it is crucial to say which way your proof does this if you wish to convince the reader that what you have in mind is correct. Ditto for the justification of the claim that $\,3\mid p^2\Rightarrow 3\mid p.\,$ A rigorous proof leaves the reader no doubts about such. 
Rigor is especially important here, since many students make errors on these matters, and for millenia there was much confusion about such. – Math Gems Feb 16 at 19:55 Oh I see i am sry i misunterstood you. Thanks, i added the stuff – Dominic Michaelis Feb 16 at 20:01 Just call $\sqrt {12}$ a rational number. What can you say about it? $12$ must be a square of rational number. $12= k^2$ where ($k =\dfrac{p}{q}$) GCD$(p,q)=1$ $12= 2^2 \cdot 3=\dfrac{p^2}{q^2} \implies 3= \dfrac{p^2}{q^2 \cdot 2^2} \implies 3$ is a square. Which implies it has odd number of divisors, but it doesn't. - The following is different in style. It avoids the use of the unique factorization property that arguments about the divisibility by the prime $3$ use. Assume $\sqrt{12}$ is rational. Choose among all equivalent fractions the one with the least positive denominator; let it be $m/n$. Thus $m^2 = 12 n^2$, and we have $$9 n^2 < m^2 < 16 n^2$$ $$3n < m < 4 n$$ $$0 < m -3n < n$$ Now $$\begin{align} \left({12n - 3m \over m - 3n}\right)^2 &= { 9(16n^2 -8mn+m^2)\over m^2-6mn+9n^2}\\ &= { 9(16n^2 - 8mn + 12n^2)\over 12n^2 - 6mn + 9n^2} \\ &= { 36(7n-2m)n\over 3(7n-2m)n} = 12\,, \end{align}$$ which is to say that ${12n - 3m \over m -3n}$ equals $\sqrt{12}$ and has a lesser denominator. This is impossible since we chose $m/n$ to be in least terms. QED In case you're wondering how to find the fraction, it's from the continued fraction expansion of $\sqrt{12}$. One has $$\begin{align} x=\sqrt{a^2+b}&=a+{b \over 2a + \displaystyle{b \over 2a + \displaystyle {b \over 2a + \cdots}}} \\ &= a + {b \over a + x} \end{align}$$ In this case $a=b=3$ and if $x = M/N = m/n$ are two square-roots of $12$, then $${m\over n} = 3 + {3 \over 3 + {M \over N}} = {12 N + 3M \over 3N + M}$$ Now solve the system $$m = 12N + 3M, \quad n = 3N + M$$ for $M$, $N$, and get the new fraction in terms of $m$, $n$: $${M \over N} = {12n - 3m \over m -3n}$$ You then have to check that it's still a square-root of $12$ and the denominator is positive and has decreased. The procedure works in general as long as $a^2+b$ is not a square (and $a$ and $b$ are positive). - Wait: I found something strange: in your last 2 line, before QED, the fraction ought to be $\frac{9(28n^2-8mn)}{3(7n^2-3mn)}$, not as you claimed. Per chance I missed something? Regards. – awllower Apr 7 at 9:45 Thanks! There was a typo earlier, too. Fixed now. – Michael E2 Apr 7 at 11:49 Thanks for this great answer thus. – awllower Apr 9 at 0:33 Yet another simple way to show this! The question is equivalent with the irrationality of $\sqrt3$, which is the same as showing that there is no rational solution to $x^2-3=0$. By Eisenstein's criterion, the polynomial is indeed irreducible over $\mathbb Q$. So this finishes the proof. - I'll assume you know how to show that $\sqrt 2$ is irrational, in the same way you can show that for any prime $p$ then $\sqrt p$ is irrational, and we know that if $a$ is irrational and $b\ne 0$ is rational then $ab$ is irrational then we have$$\sqrt12=2\sqrt3$$now $2\ne 0$ is rational and 3 is prime which implies that $\sqrt3$ is irrational thus we have $\sqrt12=2\sqrt3$ is irrational. - Another angle on this is to prove that no integer is the square of a ratio. All squares are the squares of integers. Then, since $12$ does not have an integer square root, its square root cannot be rational, either. 
To show that no integer is the square of a ratio, suppose $(\frac{n}{m})^2 = k$ where $m, n$ and $k$ are integers, $n/m$ is in lowest terms, and $m\neq 1$. But that situation is impossible. $$(\frac{n}{m})^2 = k$$ $$\frac{n\cdot n}{m\cdot m} = k$$ If $n/m$ is in lowest terms, as we assumed, that means that $m$ does not divide $n$. This implies that $m$ and $n$ have completely distinct prime factors, and hence so do $m\cdot m$ and $n\cdot n$, because squaring a number only repeats its prime factors. So $m\cdot m$ cannot divide $n\cdot n$, and thus $\frac{n\cdot n}{m\cdot m}$ cannot be an integer, unless $m = 1$ which we ruled out. Therefore, $\sqrt{12}$ cannot be a ratio. It must either be an integer (in which case $12$ is a square), or else irrational. Our goal is therefore to show that $12$ isn't a square. Observe that $12$ factors into $2\cdot 2\cdot 3$. A square has prime factors that are all of even duplicity, so that these factors can be divided into two identical groups. There is no way to separate the factors $2\cdot 2\cdot 3$ into two identical groups because $3$ occurs only once. (By contrast, consider $36 = 2\cdot 2\cdot 3\cdot 3$ whose factors are each of duplicity 2, and so can be split into two groups $(2\cdot 3)(2\cdot 3) = 6\cdot 6$.) Since $\sqrt{12}$ isn't an integer, and no integer has a square root which is a ratio, $\sqrt{12}$ must be irrational. - If you’re willing to use the Fundamental Theorem of Arithmetic, which says that the decomposition of any nonzero integer as a product of primes is unique, then this proof, and all others for irrationality of $r$-th roots, drops right out. Write $m^2=12n^2$. This contradicts FTA because there are evenly many $3$’s on the left but oddly many on the right. - If you know that $\sqrt{3}$ is irrational, then there is an easier method: if $\sqrt{12}$ were rational it could be written in the form $\frac{m}{n}$, but we know $\sqrt{12}=\sqrt{2^{2}\cdot 3}=2\sqrt{3}$, so $\sqrt{3}=\frac{m}{2n}$ would be rational too, which is a contradiction. So $\sqrt{12}$ cannot be rational. And if you don't know that $\sqrt{3}$ is irrational, you can prove it in the usual way described by others and then use this method to conclude that $\sqrt{12}$ is irrational from the irrationality of $\sqrt{3}$. - To do it directly ignore the prime 2 altogether, and go for the prime 3 which appears to an odd power in the equation $$m^2=12n^2$$ (assume lowest terms) The right hand side is divisible by 3, so the left hand side must be divisible by 3, so we must have $m=3r$. Our equation becomes $$9r^2=12n^2 \text{ or }3r^2=4n^2$$ Now we see similarly that $n$ must be divisible by 3, contradicting our lowest terms assumption. - You are using "$3 \mid m^2 \implies 3 \mid m$". At this level, many instructors would want to see justification for this. – Pete L. Clark Feb 16 at 19:47 @PeteL.Clark Isn't it trivial though? – Thomas Feb 16 at 19:49 @Thomas: No, it isn't. If you know Euclid's Lemma or the Fundamental Theorem of Arithmetic, it is easy. But neither of these are trivial results and in fact most first courses in proof techniques do not establish them. A lower level proof uses the contrapositive and the division algorithm. – Pete L. Clark Feb 16 at 19:53 @PeteL.Clark I see, thanks. – Thomas Feb 16 at 19:57 You could have done it this way too. Assume $\displaystyle\frac{m}{n}$ is written in its simplest form. Then $m^2=2(6n^2)\Rightarrow m$ is even. Substituting $m=2k\Rightarrow n$ is even.
Thus both $m$ and $n$ have a common factor of 2, contradicting the statement that $\displaystyle\frac{m}{n}$ is in its simplest form. - 9 How did you deduce "$m=2k \Rightarrow n$ is even"? – P.. Feb 16 at 15:38
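For completeness (an addition, not from the thread): the gap flagged in the last comment can be closed by switching to the prime $3$. From $m=2k$ and $m^2=12n^2$ one gets
$$4k^2=12n^2\ \Rightarrow\ k^2=3n^2\ \Rightarrow\ 3\mid k,$$
and writing $k=3j$ gives $9j^2=3n^2$, i.e. $n^2=3j^2$, so $3\mid n$ as well. Since also $3\mid m$ (because $m=2k=6j$), the fraction $m/n$ was not in lowest terms after all — the desired contradiction, only with the common factor $3$ rather than $2$.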
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 131, "mathjax_display_tex": 25, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9612683057785034, "perplexity_flag": "head"}
http://mathoverflow.net/questions/87919?sort=newest
## Difficulties with the mod 2 Moore Spectrum

I have been informed that there is a reference out there which specifically details what goes wrong with the mod 2 Moore spectrum, i.e. why it is not $A_\infty$ or something? I do not know the details, but I am interested in this from the point of view of trying to understand how to come up with the correct notion of ideals of spectra (in the sense of Smith and others), especially the ideal generated by multiplication by 2 on the sphere spectrum. I think it is a paper by Neeman. Does anyone know of this paper, or of other papers which might detail this situation carefully? Thanks!

PS Is this question appropriate, as it is only a reference request, in the strongest sense of the phrase? -

## 2 Answers

The actual statement is much stronger than you suggest, namely: the mod 2 Moore spectrum does not admit a unital multiplication (even if it is non-associative). I don't know a reference so I'll sketch the proof: Let $R$ be a spectrum with unital product, with unit map $\eta\colon S^0\to R$ and product map $\mu: R\wedge R\to R$. Then it is straightforward to show that if $n\eta=0$ in $\pi_0R$ for some integer $n$, then $n\cdot\mathrm{id}_R: R\to R$ is homotopic to the null map as well. (The key point is that proving this uses the existence of $\mu:R\wedge R\to R$ such that $\mu\circ (\eta\wedge \mathrm{id}_R) = \mathrm{id}_R$, but nothing about associativity of such $\mu$.) If $R$ is the mod $2$ Moore spectrum, with $\eta: S^0\to R$ the generator of $\pi_0R$, then you calculate that: • $\pi_0R = Z/2$, but • $\pi_2R = Z/4$, from which it follows that • $2\eta=0$, • $2\mathrm{id}_R\neq 0$. Therefore no such unital multiplication $\mu$ on $R$ can exist. - Thank you, this is a very clear explanation! – Jon Beardsley Feb 8 2012 at 19:37 9 Yes, very nice. An alternative reason is that for the mod $2$ Moore spectrum any map $R\wedge R\to R$ must induce the zero map $\pi_0(R)\otimes \pi_0(R)\to\pi_0(R)=Z/2$ because $\pi_0(R\wedge R)=Z/4$. – Tom Goodwillie Feb 9 2012 at 2:19 3 Tom, I think your proof is better. – Charles Rezk Feb 9 2012 at 6:35

An argument from the old days. A unital multiplication gives a non-trivial splitting of $R\wedge R$ whereas its mod 2 cohomology is indecomposable as a module over the Steenrod algebra. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276134967803955, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/170754-second-order-non-homogeneous-differential-equation.html
# Thread: 1. ## Second order non homogeneous differential equation I have to solve this DE ((x+2)^2)y'' -2y=4 the hint says to get rid of the x+2 but I have no idea where to start from... do I have to replace x+2 with another variable or what? 2. I would replace the $x+2$ with $t=x+2,$ and you get $\dfrac{dy}{dt}=\dfrac{dy}{dx}\dfrac{dx}{dt}=\dfrac {dy}{dx}.$ Hence, the DE becomes $t^{2}\,\dfrac{d^{2}y}{dt^{2}}-2y=4.$ This equation is a Cauchy-Euler equation.
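To sketch how the hint finishes (standard Cauchy–Euler steps, not spelled out in the thread): trying $y=t^{m}$ in the homogeneous part $t^2y''-2y=0$ gives
$$m(m-1)-2=0\ \Longleftrightarrow\ (m-2)(m+1)=0\ \Longleftrightarrow\ m\in\{2,-1\},$$
and the constant $y_p=-2$ is a particular solution since $-2\cdot(-2)=4$. Hence
$$y=C_1 t^{2}+\frac{C_2}{t}-2=C_1(x+2)^2+\frac{C_2}{x+2}-2.$$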
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9197782874107361, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/193351/markov-chain-reach-one-state-before-another?answertab=votes
# Markov Chain Reach One State Before Another

I'm stumped on a problem. Here's my transition matrix: $$P = \begin{bmatrix} \frac{3}{4}&\frac{1}{4}&0&0&0&0&0 \\\\ \frac{1}{2}&\frac{1}{4}&\frac{1}{4}&0&0&0&0 \\\\ \frac{1}{4}&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0&0&0 \\\\ \frac{1}{4}&0&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0&0 \\\\ \frac{1}{4}&0&0&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0 \\\\ \frac{1}{4}&0&0&0&\frac{1}{4}&\frac{1}{4}&\frac{1}{4} \\\\ \frac{1}{4}&0&0&0&0&\frac{1}{4}&\frac{1}{2}\end{bmatrix}$$ where the state space is {0, 1, 2, 3, 4, 5, 6}. I actually have two questions, one of which I've already answered and the other I cannot figure out. Here's the one I've already answered: Suppose the chain has been running for a long time and we start watching the chain. What is the probability that the next three states will be $4$, $5$, $0$ in that order? For this problem, I found the invariant probability vector $\pi$. I then computed $$\pi(4) * p(4, 5) * p(5, 0) = .013 * \frac{1}{4} * \frac{1}{4} = .0008125.$$ Did I take the right approach to this question? If not, I would appreciate being pointed in the right direction. The problem I cannot figure out reads: Suppose the chain starts in state $1$. What is the probability that it reaches state $6$ before reaching state $0$? I get the feeling that I will have to compute the $Q$ matrix treating $0$ as an absorbing state, but beyond that, I can't think of what I would do. Any advice? -

## 2 Answers

This is exercise 1.10 in the second edition of Introduction to Stochastic Processes by Gregory Lawler. To solve it, use the method described starting in the middle of page 29. You need to consider both states $0$ and $6$ as absorbing, and use both the $Q$-matrix of transition probabilities from $\{1,2,3,4,5\}$ to itself, and the $S$ matrix of transition probabilities from $\{1,2,3,4,5\}$ to $\{0,6\}$. That is, we have $$Q=\pmatrix{1/4&1/4&0&0&0\cr 1/4&1/4&1/4&0&0\cr 0&1/4&1/4&1/4&0\cr 0&0&1/4&1/4&1/4\cr 0&0&0&1/4&1/4\cr}$$ and $$S=\pmatrix{1/2&0\cr 1/4&0\cr 1/4&0\cr 1/4&0\cr 1/4&1/4\cr}.$$ The probability you want is the $(1,6)$ entry in the matrix $$(I-Q)^{-1}S=\pmatrix{143/144&1/144\cr 47/48&1/48\cr 17/18&1/18\cr 41/48&7/48\cr 89/144&55/144}.$$ The columns of this matrix are labelled $0,6$ while the rows are labelled $1,2,3,4,5$. The required probability is $1/144$. - Thanks, Byron. That's exactly what I was looking for. Incidentally I was able to figure it out when I needed it, but thanks anyway! – Jack Radcliffe Sep 24 '12 at 23:38

For the first question, I feel like what it's asking is slightly more complicated. What it seems to be asking is, suppose I let this chain run for a while, and then I observe it to be in a state $x$. What is the chance that immediately after this, the chain consecutively visits 4, 5 and then 0, in that order. So I would condition on seeing the chain in state $x$, and then sum over the state space. In other words, $\sum_{x=0}^6\pi(x)*p(x,4)p(4,5)p(5,0)$ This should simplify nicely and you might find a surprise in the answer. For the second question, let $f(x)$ be the probability that a Markov chain starting at $x$ hits 6 before 0. In other words, you are interested in $f(1)$. We can come up with the following set of equations: $\forall x\neq 0,6: \ f(x)=p(x,6)+\sum_{z\notin \{0,6\}}p(x,z)f(z)$ $f(0)=0$ $f(6)=1$ which gives you a system of seven equations in seven unknowns (well, two are immediate). Solving this will give you what you desire. -
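A quick numerical check of the first answer's computation (a sketch, with states 1–5 transient and 0, 6 absorbing):

Q = [1/4 1/4 0 0 0; 1/4 1/4 1/4 0 0; 0 1/4 1/4 1/4 0; 0 0 1/4 1/4 1/4; 0 0 0 1/4 1/4];
S = [1/2 0; 1/4 0; 1/4 0; 1/4 0; 1/4 1/4];
Pabs = (eye(5) - Q) \ S;    % row i, column j: start in state i, absorbed in 0 (j=1) or 6 (j=2)
Pabs(1,2)                   % should give 1/144, i.e. about 0.0069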
http://stats.stackexchange.com/questions/13301/assuming-u-sim-n0-sigma2-when-y-is-highly-skewed
Assuming $u\sim N(0,\sigma^2)$ when y is highly skewed

Does it make sense to assume $u\sim N(0,\sigma^2)$ when I know from a histogram that $y$ is highly skewed? Because from the assumption $u\sim N(0,\sigma^2)$ it follows that $y\sim N(x\beta,\sigma^2)$, and I'm absolutely not sure the assumption $u\sim N(0,\sigma^2)$ makes sense in a case where I know that the distribution of $y$ is not bell shaped. The alternative would be just to run OLS without any assumption about the error term, but in this case I can't analyze outliers and leverages (which is what I'd really like to do, because otherwise I can't present much more than a line which minimizes the squared sum of the residuals). [Addendum: I can't do an outlier analysis because I cannot define "outlier" in a context where I don't assume a distribution; there is no outlying without a distribution.]

Besides your answers I'd really like to have a recommendation for a good book where I can find some thoughts about what assumptions we should make when $y$ is obviously not normally distributed. Regards

- – wok Jul 20 '11 at 15:41
- 1 You need to put some more background into this. In the linear model $y_i = \alpha + \beta x_i + \epsilon_i$, it is not unusual to assume the $\epsilon_i$ are iid with a normal distribution of mean $0$, but there is no requirement for the $x_i$ or $y_i$ to be normally distributed. – Henry Jul 20 '11 at 15:41
- Well, just because $y|x \sim N(x \beta, \sigma^2)$ doesn't mean that a histogram of the marginal distribution of $y$ will look bell shaped; I believe that will only happen if $x$ is also normally distributed. – Macro Jul 20 '11 at 16:37
- 4 @Henry For background you could read the almost identical series of questions here, here, and here. "What we've got here is failure to communicate." – whuber♦ Jul 20 '11 at 17:17
- 1 @Mark Pick any $\beta$ and $\sigma$, generate 10 iid draws $\epsilon_1$, ..., $\epsilon_{10}$ from a normal(0, $\sigma$) distribution, and create the dataset $((2^i, \beta 2^i + \epsilon_i), i=1,\ldots,10)$. When $|\beta|$ and $\sigma$ are near $1$, the y's will be highly positively skewed but the data are perfectly linear with beautifully normal errors. In short, the skewness of the y's comes from the skewness of the x's but (of itself) reveals nothing at all about the distribution of the residuals. You check distributional assumptions by studying the residuals, not the y's. – whuber♦ Jul 20 '11 at 19:44

1 Answer

If $u$ is the residual from a regression of $y$ on some other variable $x$, then I think this is a variant of an earlier question. The residuals $u$ can be Gaussian even if the distribution of $y$ is highly skewed, as it may simply be that the distribution of $x$ is highly skewed. Consider an example of estimating temperature ($y$) as a function of latitude ($x$); here $u$ represents the measurement error of the thermometer (and is Gaussian). The distribution of $y$ values in our sample will depend on where we choose to site our weather stations. If we place them all either at the poles or the equator, then we will have a bimodal distribution. If we place them on a regular equal-area grid, we will get a unimodal distribution of $y$ values, even though the physics of climate is the same for both samples and the measurement uncertainty $u$ is normal.

- Thanks for the answer! Do you have some good references in this context for me? A good book or a paper. – MarkDollar Jul 21 '11 at 5:41
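whuber's construction in the comments is easy to simulate. Here is a minimal sketch (the seed and the particular values of $\beta$ and $\sigma$ are arbitrary choices) showing a strongly skewed response whose regression residuals are nevertheless just Gaussian noise.

```python
import numpy as np

def sample_skewness(a):
    """Standardized third moment of a sample."""
    a = np.asarray(a, dtype=float)
    z = (a - a.mean()) / a.std()
    return np.mean(z**3)

rng = np.random.default_rng(1)

# x_i = 2^i, y_i = beta * x_i + eps_i with Gaussian errors, as in whuber's comment.
beta, sigma = 1.0, 1.0
x = 2.0 ** np.arange(1, 11)
y = beta * x + rng.normal(0.0, sigma, size=x.size)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

print("sample skewness of y:        ", sample_skewness(y))          # y inherits the skew of x
print("sample skewness of residuals:", sample_skewness(residuals))  # the residuals are just the Gaussian errors
```

The output shows $y$ strongly right-skewed while the residuals behave like plain Gaussian noise, which is the point of the comment: distributional assumptions are checked on the residuals, not on the marginal distribution of $y$.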
http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/42514
## Awfully sophisticated proof for simple facts [closed] ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It is sometimes the case that one can produce proofs of simple facts that are of disproportionate sophistication which, however, do not involve any circularity. For example, (I think) I gave an example in this M.SE answer (the title of this question comes from Pete's comment there) If I recall correctly, another example is proving Wedderburn's theorem on the commutativity of finite division rings by computing the Brauer group of their centers. Do you know of other examples of nuking mosquitos like this? - 29 I once saw someone proving resolutions of singularities of curves by quoting Hironaka's theorem. – Richard Borcherds Oct 17 2010 at 15:23 15 rjlipton.wordpress.com/2010/03/31/april-fool – Steve Huntsman Oct 17 2010 at 15:42 20 Brauer groups and cohomology are certainly overkill for Wedderburn's theorem: if $D$ is a finite division algebra and $L$ is a maximal subfield, then the Noether-Skolem theorem shows that the multiplicative group of $D$ is a union of conjugates of that of $L$; hence $D$=$L$. – JS Milne Oct 17 2010 at 20:07 8 @Maxime: I have trouble believing that such a proof is actually non-circular. Surely such proofs form a step, however easy, in the classification. – Qiaochu Yuan Oct 17 2010 at 21:59 15 I once convinced myself the Cantor set is non empty because it is a descending intersection of non empty closed subsets of a compact set, before noticing it contains 0. – roy smith Jan 29 2011 at 6:48 show 18 more comments ## 69 Answers Irrationality of $2^{1/n}$ for $n\geq 3$: if $2^{1/n}=p/q$ then $p^n = q^n+q^n$, contradicting Fermat's Last Theorem. Unfortunately FLT is not strong enough to prove $\sqrt{2}$ irrational. I've forgotten who this one is due to, but it made me laugh. EDIT: Steve Huntsman's link credits it to W. H. Schultz. - 16 LoL ! – Qfwfq Oct 17 2010 at 16:47 28 Yes, Fermat's Last Theorem is an important generalization of the fact that $2^{1/n}$ is irrational. :-) – Greg Kuperberg Oct 17 2010 at 20:42 6 it's not clear to me that FLT does not use Gauss' lemma on factoring integer polynomials. – some guy on the street Oct 17 2010 at 20:50 66 This argument is essentially circular. Indeed, we can assume $n$ is prime (just like for FLT) and then the proof of FLT first passes from a hypothetical nontrivial solution to $a^n + b^n = c^n$ for prime $n > 2$ to a suitable "Frey curve" $y^2 = x(x-a^n)(x+b^n)$ where one has to rig certain congruential and gcd conditions on $(a,b,c)$, including that $a$, $b$, and $c$ are pairwise coprime. Yet that step applied to $(p,q,q)$ is exactly what would be the "Euclid-style" proof that $2$ is not a rational $n$th power. Hmm, another disguised version of a Euclid proof. Like the Furstenberg thing...:) – BCnrd Oct 18 2010 at 4:25 36 A student in the BeNeLux olympiad apparently proved that 56 is not a cube by observing that 56 = 4^3 - 2^3 and referring to Fermat's Last Theorem for the exponent 3. – Franz Lemmermeyer Jun 15 2011 at 15:07 show 2 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. 
An example that came up in my measure theory class today: The harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ diverges, because otherwise the functions $f_n := \frac{1}{n} 1_{[0,n]}$ would be dominated by an absolutely integrable function. But $$\int_{\bf R} \lim_{n \to \infty} f_n(x)\ dx = 0 \neq 1 = \lim_{n \to \infty} \int_{\bf R} f_n(x)\ dx,$$ contradicting the dominated convergence theorem. - 10 I love this one. – Mariano Suárez-Alvarez Nov 4 2010 at 3:17 3 Isn't this the standard proof? – Harry Gindi Dec 9 2010 at 4:58 18 @Harry, no it isn't: this depends on knowing the dominated convergence theorem (which very few people prove for the Riemann integral, so usually has to wait until you are studying measure theory) The divergence of the harmonic series follows from the integral comparison thorem, for example, a much more elementary proof! – Mariano Suárez-Alvarez Dec 9 2010 at 13:13 27 The standard proof is $\sum_{n=2^i+1}^{2^{i+1}} n^{-1} \geq \sum_{n=2^i+1}^{2^{i+1}} 2^{-(i+1)} = \frac 12$. – Kevin O'Bryant Jun 19 2011 at 22:26 3 I remember someone had an article giving 20 different proofs of this fact. – John Jiang Nov 28 2011 at 9:42 show 3 more comments There are infinitely many primes because $\zeta(3)=\prod_p \frac{1}{1-p^{-3}}$ is irrational. - 6 Or since $\frac12+\frac13+\frac15+\frac17+\ldots$ diverges (by Bertrand's postulate). – J.C. Ottem Oct 17 2010 at 15:47 10 @J.C. Ottem, How does Bertrand's postulate give us divergence? – Andres Caicedo Oct 17 2010 at 16:51 14 Even better, there are infinitely many primes because there are arbitrarily long arithmetic progressions in them (the Green-Tao theorem). – Mark Schwarzmann Oct 17 2010 at 19:39 12 @Mark: surely the proof of the Green-Tao theorem uses at some point the infinitude of the primes... – Qiaochu Yuan Oct 17 2010 at 22:04 11 @Qiaochu,Mark: It does (they need to embed [1,N] in $Z_p$ for some prime bigger than N to get a nice field structure for some arguments to work). – Thomas Bloom Oct 18 2010 at 5:06 show 6 more comments Here's a topological proof that $\mathbb{Z}$ is a PID. Let $p,q$ be relatively prime. Then the line from the origin to the point $(p,q)\in\mathbb{R}^2$ does not pass through any lattice point, and therefore defines a simple closed curve in the torus $\mathbb{T}=\mathbb{R}^2/\mathbb{Z}^2$. Cut the torus along this curve. By classification of surfaces, the resulting surface is a cylinder. Therefore, we can reglue it to get a torus, but where our simple closed curve is now a "stupid" such thing, i.e., a ring around the torus. Which is all to say that in this case, there exists an automorphism of the torus which takes $(p,q)\in\mathbb{Z}^2=\pi_1(\mathbb{T})$ to $(1,0)$. But this gives a matrix `$\begin{bmatrix} p & x \\ q & y \end{bmatrix}\in GL_2(\mathbb{Z})$`, so $py-qx\in\mathbb{Z}^{\times}$, i.e., $py-qx=\pm 1$. The only two things this proof needs are the computation of the homology of a torus and the classification of surfaces, neither of which actually relies on $\mathbb{Z}$ being a PID! - 1 Strictly speaking this is a bit weaker than Z being a PID, but wow. – Harry Altman May 5 2011 at 20:53 17 I think that in the brain of many low-dimensional topologists, rational numbers, Euclid's algorithm and SL(2,Z) are really instantaneously replaced by topological data on the torus (say homotopy classes of essential curves, Dehn twists and mapping class groups). 
– Maxime Bourrigan May 6 2011 at 0:32 3 I recently discovered that this is exactly how I think about this when I found myself very gingerly giving the algebraic argument in class (which graduate students find obvious) and then cavalierly dispensing the topological argument as the trivial one. – Richard Kent Jan 1 2012 at 15:57 1 wow that's cool. are there analogous arguments for the other euclidean domains? – Vivek Shende Dec 20 at 13:08 The number of real functions is $c^c=2^c$ which is bigger than $c$ by Cantor's theorem ($c$ is cardinality continuum). The number of real continuous functions is at most $c^{\aleph_0}=c$ as they can be recovered from restrictions to ${\bf Q}$, and there are $c^{\aleph_0}$ many functions ${\bf Q}\to {\bf R}$. This argument, which requires several minor steps in an introductory set theory class, eventually shows that there exists a discontinuous real function. - 6 Same reasoning: there exists a non-Borel real function. Or there exists a non-Borel set of reals. Now it's not so extreme-seeming. – Gerald Edgar Oct 17 2010 at 20:40 7 Also noteworthy is that this is a constructive proof of the existence of a discontinuous real function. – Tsuyoshi Ito Oct 18 2010 at 3:55 3 @Tsuyoshi: I don't follow. As I understand the term, a constructive proof of the existence of a discontinuous real function would be something like "Consider the characteristic function of the origin. Notice that it is discontinuous at zero because..." Komjath's answer is an exemplar of a nonconstructive proof. – Pete L. Clark Oct 18 2010 at 11:23 9 @Pete: Of course, it is an extremely simple fact that there is a constructive example of a discontinuous real function. This proof is an awfully sophisticated proof of it (because Cantor’s theorem can be proved constructively, if I am not mistaken). But now I know that my joke fell flat…. – Tsuyoshi Ito Oct 18 2010 at 22:41 6 While this is obviously overkill, the general technique is so useful that perhaps students should see this proof -- when cardinality is introduced, it's not immediately obvious just how useful it is. For example, even before saying what a computer program is (but knowing that they are specified by strings), one can deduce that there are uncomputable sets, and similarly non-regular languages, etc. I'd say the general idea is that we often have countably many descriptions (programs, grammars, restrictions to $\mathbf{Q}$) but uncountably many objects, so most objects cannot be described. – Max Oct 14 2011 at 11:22 show 1 more comment D J Lewis, Diophantine equations: $p$-adic methods, in W J LeVeque, ed., Studies In Number Theory, 25-75, published by the MAA in 1969, stated on page 26, "The equation $x^3-117y^3=5$ is known to have at most 18 integral solutions but the exact number is unknown." No proof or reference is given. R Finkelstein and H London, On D. J. Lewis's equation $x^3+117y^3=5$, Canad Math Bull 14 (1971) 111, prove the equation has no integral solutions, using ${\bf Q}(\root3\of{117})$. Then Valeriu St. Udrescu, On D. J. Lewis's equation $x^3+117y^3=5$, Rev Roumaine Math Pures Appl 18 (1973) 473, pointed out that the equation reduces, modulo 9, to $x^3\equiv5\pmod9$, which has no solution. I suspect Lewis was the victim of a typo, and some other equation was meant, but Finkelstein and London appear to have given an inadvertently sophisticated proof for a simple fact. - 1 The minus sign is inconsequential,just change $y$ to $-y$. 
– Gerry Myerson Jun 6 2011 at 7:05 show 1 more comment In his 1962 article "A unique decomposition theorem for 3-manifolds", Milnor is actually interested in the unicity of a prime decomposition. For the existence, the method is very natural: if you find an irreducible sphere, you cut the manifold along it and obtain a decomposition $M = M_1 \sharp M_2$, and you do it again with each factor, and so on. Of course, the hard part is now to prove that this process terminates after a finite number of steps. For that, Milnor refers to Kneser but remarks that "if one assumes the Poincaré hypothesis then there is a much easier proof. Define $\rho(M)$ as the smallest number of generators for the fundamental group of M. It follows from the Gruško-Neumann theorem that $\rho(M_1\sharp M_2) = \rho(M_1) + \rho(M_2)$. Hence if $M\simeq M_1 \sharp \cdots \sharp M_k$ with $k > \rho(M)$ then some $M_i$ must satisfy $\rho(M_i)=0$, and hence must be isomorphic to $S^3$." A nice follow-up of this proof/joke is that Perel'man's proof of Poincaré's conjecture doesn't even use Kneser-Milnor decomposition and this argument is therefore valid. - 3 This sounds to me somewhat similar to assertions of the form: "if the generalized Riemann hypothesis is assumed, then a relatively easy proof of such-and-such fact may be given as follows." The emphasis is less on the length of a complete proof than on what one believes is true, and proceeding from there -- sort of like saying, "morally, this should hold because...". – Todd Trimble May 5 2011 at 19:29 The fundamental group of the circle is $\mathbb{Z}$ because: It is a topological group, so its fundamental group is Abelian by the Eckmann-Hilton argument. Thus its fundamental group and first singular homology group coincide by the Hurewicz theorem. Since singular homology is the same as simplicial homology, I can just do the one line of computation to obtain the result. - 4 Yes, but the fundamental group of the circle is so basic that it is usually the first thing computed in any algebraic topology course or book. Isn't using Hurewicz and the equivalence of singular and simplicial homology a nuke for this problem? If not, I do not see why any of the other answers work – Steven Gubkin Oct 18 2010 at 19:34 4 "The fundamental group of the circle is $\mathbb Z$" In my opinion this is one of the most consequential theorems in all of mathematics. So I don't mind if people suggest proofs coming from very different sources. – Christian Blatter Oct 18 2010 at 19:40 3 Somehow this proof doesn't feel like a pointless nuke to me, rather it does explain from some perspective what's going on. The usual proof, i.e. proving the path-lifting property for the covering $\mathbb{R} \to S^1$, is also a bit of work, and breaking up loops into paths on $\mathbb{R}$ feels unsatisfying. It also seems odd to never mention that $[s \mapsto e^{2 \pi n s}] * [s \mapsto e^{2 \pi m s] = [s \mapsto e^{2 \pi (m+n) s}]$ has something to do with a group structure on $S^1$... – ABayer Oct 18 2010 at 21:38 29 I'm surprised people don't like this. It's not the most extreme example here, but imagine this scenario: you see a colleague in the hall and ask what he's teaching today. "I'm introducing the fundamental group." And you ask if he'll compute `$\pi_1(S1)$`. "Well, not today. I'll define the fundamental group, but before I can compute π1(S1)' I'll have to set up singular cohomology (long exact sequences, excision, all that) and then once I've explained simplicial homology we can get back to `$\pi_1(S1)$`. 
It'll be a month or two." I bet you'd worry about your colleague's sanity. – Dan Ramras Oct 23 2010 at 13:52 2 On the other hand, it's perfectly natural to use the Eckmann-Hilton argument in order to show that the degree map is a homomorphism. I don't know a reference that uses E-H here, but that's how I did it in class this semester. – Dan Ramras Oct 23 2010 at 13:57 show 7 more comments Another example from Math Underflow: We can prove Fermats Last Theorem for $n=3$ by a simple application of Nagell-Lutz (to compute the torsion subgroup) then Mordells Theorem (to see that the group must be $\mathbf{Z}^r \times \mathbf{Z}/3\mathbf{Z}$) then to finish Gross-Zagier-Kolyvagin theorem (which gives $r = 0$) - and that shows it has no nontrivial solutions. I beleive a similar approach works for $n=4$. - 19 1+ for "Math Underflow" (and your answer). – Martin Brandenburg Oct 17 2010 at 16:32 31 This is actually a nice answer, because it treats $x^3 + y^3 = z^3$ like what it is -- a rational elliptic curve -- and proceeds to find all rational points on it in the way which is easiest given the current level of technology. – Pete L. Clark Oct 17 2010 at 18:03 4 @adrian But isn't the Jacobian of the Fermat quartic isogenous to a product of 3 elliptic curves, each of analytic rank 0? – paul Monsky Oct 18 2010 at 4:41 21 @Pete: Nice point. It's like arguing that sending email is frivolous overkill since a carrier pigeon could do the same job with much less technology. – Cam McLeman Oct 18 2010 at 16:39 2 If such a proof works for n = 4, then it's a better answer for this question than the n = 3 one, because the simplest proof for n = 4 is much simpler than the simplest proof for n = 3. – Zsbán Ambrus Mar 4 2012 at 12:37 show 2 more comments Seen on http://legauss.blogspot.com.es/2012/05/para-rir-ou-para-chorar-parte-13.html Theorem: $5!/2$ is even. Proof: $5!/2$ is the order of the group $A_5$. It is known that $A_5$ is a non-abelian simple group. Therefore $A_5$ is not solvable. But the Feit-Thompson Theorem asserts that every finite group with odd cardinal is solvable, so $5!/2$ must be an even number. - 1 +1 - I love it! – Todd Trimble Dec 30 at 2:26 In the recent paper by Ono and Bruinier (it's currently on the AIM web site) "An algebraic formula for the partition function" they use their formula to determine the number of partitions of 1. This calculation involves CM points, evaluating a certain weak Maass form at these points, the Hilbert class field of $\mathbb{Q}(\sqrt{-23})$, ... etc. - 3 Now I have to wonder if the formula is valid for 0... – Harry Altman Jan 29 2011 at 5:43 show 1 more comment • There is no largest natural number. The reason is that by Cantor's theorem, the power set of a finite set is a strictly larger set, and one can prove inductively that the power set of a finite set is still finite. • All numbers of the form $2^n$ for natural numbers $n\geq 1$ are even. The reason is that the power set of an $n$-element set has size $2^n$, proved by induction, and this is a Boolean algebra, which can be decomposed into complementary pairs $\{a,\neg a\}$. So it is a multiple of $2$. • Every finite set can be well-ordered. This follows by the Axiom of Choice via the Well-ordering Principle, which asserts that every set can be well-ordered. • Every non-empty set $A$ has at least one element. The reason is that if $A$ is nonempty, then $\{A\}$ is a family of nonempty sets, and so by the Axiom of Choice it admits a choice function $f$, which selects an element $f(A)\in A$. 
- 1 The absurdity of the examples, to my way of thinking, is the idea that one should appeal to a big axiom such as AC to prove the completely trivial facts that every finite set has a well-order or that nonempty sets have members. Of course, AC is completely unnecessary here, and so this is using a nuclear weapon to kill fleas. – Joel David Hamkins Oct 20 2010 at 16:28 21 Well, what one could do here is first prove the claim using AC, and then use Godel's theorem that every statement in Peano Arithmetic that is provable in ZFC can be proven in ZF, which I think is very much in the "using a nuclear weapon to kill fleas" spirit of this exercise. – Terry Tao Nov 3 2010 at 22:17 1 @andres: "finite" does not have to use ordinals. A set $M$ is Tarski-finite iff every nonempty subset of $P(M)$ has a maximal element with respect to inclusion. Tarski-finite is equivalent to the usual notion of finite (in a weak version of ZF). – Goldstern Feb 21 2012 at 20:07 1 Zsban, the usual definition of $\omega$ is that it is the least inductive set (containing $0$ and closed under successor $x\mapsto x\cup\{x\}$). The concept of finite is defined after this, since a set is finite if it is bijective with a proper initial segment of $\omega$. – Joel David Hamkins Mar 5 2012 at 0:56 show 5 more comments Proposition. Let $f$ be a bounded measurable function on $[0,1]$. Then there is a sequence of $C^\infty$ functions which converges to $f$ almost everywhere. Proof (by flyswatter). Take the convolution of $f$ with a sequence of standard mollifiers. Proof (by nuke). By Carleson's theorem the Fourier series of $f$ is such a sequence. - Using character theory, any group of order 4 is abelian since the only way to write 4 as a sum of squares is 4 = 1^2 + 1^2 + 1^2 + 1^2. - 15 Well, any way that includes the required one copy of 1^2. Otherwise 2^2 would be a possibility... – Harry Altman Jan 29 2011 at 4:26 5 I guess, you can make it more striking: "Using character theory, since any group of order 4 is abelian hence the only way to write 3 as a sum of squares is 3 =1^2 + 1^2+ 1^2" Right? – Ostap Chervak May 17 2012 at 8:53 1 Well, 4 is already the sum of one square. To complete the proof, one should say "the only way to write 4 as a sum of squares, one of which is 1, coming from the trivial representation, is..." – Todd Trimble Aug 20 at 1:10 2 When I first learned character theory, I asked if one could prove the four-square theorem by exhibiting a group of every order with exactly four conjugacy classes. While it became obvious a second later that this was only a fantasy, I'm excited that someone else thought of the relation between character theory and the four-square theorem! – David Corwin Dec 21 at 18:33 show 1 more comment No finite field $\mathbb{F}_q$ is algebraically closed: Let $k$ be an algebraically closed field. Then every element of $GL_2(k)$ has an eigenvector, and hence is similar to an upper triangular matrix. Therefore $GL_2(k)$ is the union of the conjugates of its proper subgroup $T$ of upper triangular matrices. No finite group is the union of the conjugates of a proper subgroup, so $GL_2(k)$ is not finite. Hence $k$ is not finite either. - 3 That's actually a pretty simple argument. – Agol Aug 20 at 21:49 show 1 more comment If two elements in a poset have the same lower bounds then they are equal by Yoneda lemma. (I actually said this in a seminar two weeks ago, and of course I explained I killed a mosquito with a nuke.) 
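The "no finite field is algebraically closed" argument above can be checked by brute force for a small $q$. The sketch below, with $q=3$ as an arbitrary choice, confirms that the conjugates of the upper-triangular subgroup do not cover $GL_2(\mathbb{F}_q)$.

```python
import itertools

q = 3  # any small prime works; 3 keeps the search tiny

def matrices():
    """All invertible 2x2 matrices (a, b, c, d) over F_q, meaning [[a, b], [c, d]]."""
    for a, b, c, d in itertools.product(range(q), repeat=4):
        if (a * d - b * c) % q:
            yield (a, b, c, d)

def mul(m, n):
    a, b, c, d = m
    e, f, g, h = n
    return ((a*e + b*g) % q, (a*f + b*h) % q, (c*e + d*g) % q, (c*f + d*h) % q)

def inv(m):
    a, b, c, d = m
    s = pow(a * d - b * c, -1, q)            # inverse of the determinant mod q
    return ((d * s) % q, (-b * s) % q, (-c * s) % q, (a * s) % q)

G = list(matrices())
T = [m for m in G if m[2] == 0]              # upper-triangular subgroup
covered = {mul(mul(g, t), inv(g)) for g in G for t in T}
print(len(covered), "out of", len(G))        # 30 out of 48 for q = 3: the union is proper
```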
- 15 I don't think this is overkill; it is actually how I think about this result. – Qiaochu Yuan Oct 17 2010 at 22:01 25 But surely normal human beings (i.e., those who have never applied Yoneda lemma of their own free will) prefer to hear "therefore they are lower bounds of each other, hence equal"? – Andrej Bauer Oct 18 2010 at 6:02 8 Well, some of us have applied the Yoneda lemma and prefer the non-Yoneda argument here, too! :) – Mariano Suárez-Alvarez Oct 18 2010 at 17:52 12 The widely used "non-Yoneda" arguments are essentially repetitions of the Yoneda argument in special cases. – Martin Brandenburg Dec 28 2010 at 9:56 1 @Martin Brandenburg: +1 can't stop laughing. – Vectornaut Aug 20 at 16:54 - 14 A nice quote from the book: “Little minds love to ask big questions, or what appears to them as big questions; never stopping to reflect how trivial the answer must be, if only the questioner would take the trouble to think it through. Sometimes it is necessary for the writer of such a serious work as the present one to call a halt in the consideration of matters of real weight and interest and to remember how weak and frail are the reasoning powers of his lowly readers.” – Harald Hanche-Olsen Oct 17 2010 at 18:55 1 I wonder if this wonderful book will be reprinted. – paul Monsky Oct 18 2010 at 7:36 3 Ah, this book is wonderful... In the wake of this post, someone recalled the copy I had out from the library. :) – Gwyn Whieldon Oct 18 2010 at 18:10 9 This book is fantastic. From the bottom of page 47: "As an axiom on which to base the positive numbers and the integers, which have in the past produced much harmless amusement and are still widely accepted as useful by most mathematicians, some such proposition as the following is sometimes considered as being pleasant, elegant, or at least handy: AXIOM: Equalisers exist in the category of categories." – Qiaochu Yuan Oct 19 2010 at 12:54 3 Peter Johnstone’s wonderful review of Paul Taylor’s Practical Foundations of Mathematics is worth reading in this connection: cs.man.ac.uk/~pt/Practical_Foundations/… “Nearly 30 years later, Paul Taylor has finally written the book of which Mathematics Made Difficult was a parody. That is not intended as a criticism of Practical Foundations of Mathematics...” – Peter LeFanu Lumsdaine Oct 21 2010 at 4:28 show 4 more comments There is a Fourier analytic proof for Sperner's theorem which is much more complicated than the combinatorial proof (and give less in certain respects). This was part pf the polymath1 project. A general point is that sometime trying to prove a Theorem X using method Y is valuable even if the proof is much more complicated than needed. So while simplification of complicated proofs is a noble endeavor, complicafication of simple theorems is also not without merit! Here is another example (taken from lecture notes by Spencer): Suppose you want to prove that there is always a 1-1 function from a (finite) set |A| to a set |B| when |B|>=|A|. But you want to prove it using the probabilistic method. Write |A|=n. If |B| is larger than n^2 or so you can show that a 1-1 map exist by considering a random function and applying the union bound. If |B| is larger than 6n or so you can apply the much more sophisticated Lovasz Local Lemma to get a proof. I am not aware of probabilistic proofs of this nature which works when |B| is smaller and this is an interesting challenge. 
- I was once flamed because I gave (in my book on Matrices) a short proof of a weak version of Perron-Frobenius' theorem (the spectral radius of a non-negative matrix is an eigenvalue, associated with a non-negative eigenvector), by using Brouwer's fixed point theorem. In my mind, that was to give students an occasion to illustrate the strength of Brouwer's theorem. Of course, there are more elementary proofs of the Perron-Frobenius theorem, even of the stronger version of it. - 4 And they fired you for this? OoOoOo,they would have been SO sued... – Andrew L Oct 17 2010 at 19:36 3 I mean that someone wrote a nasty review because of that. – Denis Serre Oct 17 2010 at 20:07 43 "Flamed", not "fired"! – Tom Smith Oct 17 2010 at 21:41 In a lecture course I saw a proof of Poincare duality by deducing it from Grothendieck duality. Proving Grothendieck duality for sheaves on topological spaces took a good part of the semester of course, and then deducing Poincare duality was still not a one liner as well, but filled an entire lecture in which we worked out what all the shrieks and derived functors were doing in terms of differential forms or singular cochains. - 11 Do you happen to have notes for that course? I think I'd actually like to see that worked out. – David Speyer Oct 23 2010 at 17:05 1 Also see people.fas.harvard.edu/~amathew/verd.pdf – David Corwin Dec 21 at 18:26 show 3 more comments Because for some reason no one has mentioned it. Russell's proof that 1+1=2. http://quod.lib.umich.edu/cgi/t/text/pageviewer-idx?c=umhistmath;cc=umhistmath;rgn=full%20text;idno=AAT3201.0001.001;didno=AAT3201.0001.001;view=pdf;seq=00000412 - 3 But maybe this is a proof that $1+1=2$ isn't such a simple fact? – Gerry Myerson Jan 29 2011 at 22:59 23 It depends on who you ask. The set theorist and logician will tell you it obvious because S({{}}) = {{},{{}}}. The category theorist and algebraist will ask "you what you mean by 1? are we assuming choice?" The geometer and topologist will tell you, its irrelevant what you call it, an apple and an apple makes two apples. And the number theorist and combinatorist will both wonder what the hell all the fuss is about. – Michael Blackmon Jan 30 2011 at 0:43 4 >The category theorist and algebraist will ask "you what you mean by 1? are we assuming choice?" I have my doubts that any category theorist or algebraist would say anything about choice here. Some might ask "what do you mean by 1", but perhaps mostly in a Socratic mode. – Todd Trimble May 8 2011 at 12:47 The sum of the degrees of the vertices of a graph is even. Proof: The number $N$ of graphs with degrees $d_1,\ldots,d_n$ is the coefficient of $x_1^{d_1}\cdots x_n^{d_n}$ in the generating function $\prod_{j\lt k}(1+x_jx_k)$. Now apply Cauchy's Theorem in $n$ complex dimensions to find that $$N = \frac{1}{(2\pi i)^n} \oint\cdots\oint \frac{\prod_{j\lt k}(1+x_jx_k)}{x_1^{d_1+1}\cdots x_n^{d_n+1}} dx_1\cdots dx_n,$$ where each integral is a simple closed contour enclosing the origin once. Choosing the circles $x_j=e^{i\theta_j}$, we get $$N = \frac{1}{(2\pi)^n} \int_{-\pi}^\pi\cdots\int_{-\pi}^\pi \frac{\prod_{j\lt k}(1+e^{\theta_j+\theta_k})}{e^{i(d_1\theta_1+\cdots +d_n\theta_n)}} d\theta_1\cdots d\theta_n.$$ Alternatively, choosing the circles $x_j=e^{i(\theta_j+\pi)}$, we get $$N = \frac{1}{(2\pi)^n} \int_{-\pi}^\pi\cdots\int_{-\pi}^\pi \frac{\prod_{j\lt k}(1+e^{\theta_j+\theta_k})}{e^{i(d_1\theta_1+\cdots +d_n\theta_n+k\pi)}} d\theta_1\cdots d\theta_n,$$ where $k=d_1+\cdots+d_n$. 
Since $e^{ik\pi}=-1$ when $k$ is an odd integer, we can add these two integrals to get $2N=0$. - There's hardly a book on class field theory that doesn't derive Kronecker-Weber as a corollary. Or quadratic reciprocity -) Disclaimer: I like these proofs. Seeing quadratic reciprocity through the eyes of "Fearless symmetry: exposing the hidden patterns of numbers" by Ash and Gross is an experience you wouldn't want to miss. - 6 @Franz: I'll bet you know how to prove Kronecker-Weber without deducing it from a larger edifice of class field theory over $\mathbb{Q}$, but I'm sorry to tell you that most contemporary algebraic number theorists (including me) do not. – Pete L. Clark Oct 18 2010 at 15:52 1 @Pete: I learnt number fields from a book by Marcus, and a proof is in there. IIRC there's also a proof very early on in Washington. – Kevin Buzzard Oct 18 2010 at 19:22 3 If you google for "Kronecker-Weber via Stickelberger", you'll find a modern version of Weber's classical idea combined with Hilbert's idea of twisting. Washington's proof, if I remember correctly, derives the global version from the local one. There's even a proof in the Monthly: Am. Math. Mon. 81, 601-607 (1974), and a practically unknown proof based on Eisenstein reciprocity due to Delaunay (Delone). – Franz Lemmermeyer Oct 18 2010 at 20:12 3 My favourite proof of Kronecker-Weber is the one first given by Shafarevich and reproduced by Cassels in his Local Fields. You do the local case first (as in Lecture 19 of my notes arxiv.org/abs/0903.2615) and then nothing more than the Minkowski bound on the discriminant is needed to derive the theorem over $\mathbf Q$. – Chandan Singh Dalawat Oct 19 2010 at 3:08 Dan Bernstein, "A New Proof that 83 is prime", http://cr.yp.to/talks/2003.03.23/slides.pdf - 5 So it actually only is a proof that 83 is a prime power.. Even better as an answer! – Woett Oct 15 2011 at 16:21 show 2 more comments A Turing machine is a mathematical formalization of a computer (program). If $y\in(0,1)$, a Turing machine with oracle $y$ has access to the digits of $y$, and can use them during its computations. We say that $x\le_T y$ iff there is a machine with oracle $y$ that allows us to compute the digits of $x\in(0,1)$. There are only countably many programs, so a simple diagonalization argument shows that there are reals $x$ and $y$ with $x{\not\le}_T y$ and $y{\not\le}_T x$. $(*)$ Being a set theorist, when I first learned of this notion, I couldn't help it but to come up with the following proof of $(*)$: Again by counting, every $x$ has only countably many $\le_T$-predecessors. So, if CH fails, there are Turing-incomparable reals. By the technique of forcing, we can find a (boolean valued) extension $V'$ of the universe $V$ of sets where CH fails, and so $(*)$ holds in this extension. Shoenfield's absoluteness theorem tells us that $\Sigma^1_2$-statements are absolute between (transitive) models with the same ordinals. The statement $(*)$, "there are Turing-incomparable reals" is $\Sigma^1_1$ (implementing some of the coding machinery of Gödel's proof of the 2nd incompleteness theorem), so Shoenfield's absoluteness applies to it. Working from the point of view of $V'$ and considering $V'$ and $V$, it follows that in $V'$, with Boolean value 1, $(*)$ holds in $V$. It easily follows from this that indeed `$(*)$` holds in $V$. 
It turns out that Joel Hamkins also found this argument, and he used it in the context of his theory of Infinite time Turing machines, for which the simple diagonalization proof does not apply. So, at least in this case, the insane proof actually was useful at the end. - 1 Also Noam Greenberg has this argument on his homepage. The simple diagonalization that you mention can actually be cast as a Baire category argument: For each Turing machine $M$, the set of pairs $(x,y)$ such that $M$ witnesses $x\leq_Ty$ is nowhere dense. Since there are only countably many Turing machines, there is a pair of incomparable Turing degrees by the Baire category theorem. – Stefan Geschke Oct 17 2010 at 19:05 1 Andres, thanks for mentioning this! My view is that many constructions in computability theory are fruitfully thought of as forcing constructions, and this is the natural destination of that view. When I teach computability theory, for example, I try when possible to set up the constructions as the problem of meeting dense sets in a partial order, specifically to emphasize this. – Joel David Hamkins Oct 19 2010 at 13:38 2 A lesser form of overkill: Instead of forcing to violate CH, just adjoin two independent Cohen reals. Neither is computable from the other, because neither is in the model of ZFC generated by the other. Then invoke absoluteness. About John Steel's comment that eliminating the machinery leaves one with a Baire category argument: If you eliminate even more machinery by inserting the proof of the Baire category theorem (for this special case), you get back to the original Kleene-Post proof. – Andreas Blass Dec 28 2010 at 20:38 show 2 more comments The proof that the reduced $C^*$-algebra of the free group has no projections has the nice corollary that the circle is connected. - 1 I sometimes wonder if the fact that the circle is connected (sorry, that the integers satisfy the Kadison-Kaplansky conjecture) is used somewhere in the building-block observations about the Fredholm index;) But maybe it isn't. – Yemon Choi Oct 17 2010 at 20:14 A recent example from MO (I found it quite entertaining) - testing primality of one and two digit numbers using Stirling's formula and Wilson's theorem (to make it even more complicated, one has to use some extensions, calculation tricks and high-precision calculations): http://mathoverflow.net/questions/42393/has-stirlings-formula-ever-been-applied-with-interesting-consequence-to-wilson - The Gauß-Bonnet theorem and the Riemann-Roch theorem for Riemann surfaces have both reasonably elementary proofs. Of course, they follow from the general Atiyah-Singer index theorem. - 1 By the index theorem, there is no nonvanishing continuous vector field on $S^{2n}$. – Steve Huntsman Oct 17 2010 at 17:35 14 But it should be noted that the discovery and the original proof of the Atiyah-Singer Theorem came from thinking about how to generalize Gauß-Bonnet-Chern and the Hirzebruch-Riemann-Roch formulas and their proof. – Dick Palais Oct 17 2010 at 17:55 13 I did read somewhere, in an expository paper, the fact that the sum of the interior angles of an Euclidean triangle to be $\pi$ stated as a Corollary to (some form of) the A-S index theorem. – Mariano Suárez-Alvarez Oct 18 2010 at 17:53 I claim that the rational canonical model of the modular curve $X(1) = \operatorname{SL}_2(\mathbb{Z}) \backslash \overline{\mathcal{H}}$ is isomorphic over $\mathbb{Q}$ to the projective line $\mathbb{P}^1$. 
Indeed, by work of Igusa on integral canonical models, the corresponding moduli problem (for elliptic curves) extends to give a smooth model over $\mathbb{Z}$. By a celebrated 1985 theorem of Fontaine, this implies that $X(1)$ has genus zero. Therefore it is a Severi-Brauer conic, which by Hensel's Lemma and the Riemann Hypothesis for curves over finite fields is smooth over $\mathbb{Q}_p$ iff it has a $\mathbb{Q}_p$-rational point. By the reciprocity law in the Brauer group of $\mathbb{Q}$, this implies that $X(1)$ also has $\mathbb{R}$-rational points and then by the Hasse-Minkowski theorem it has $\mathbb{Q}$-rational points. Finally, it is an (unfortunately!) very elementary fact that a smooth genus zero curve with a rational point must be isomorphic to $\mathbb{P}^1$. I did actually give an argument like this in a class I taught on Shimura varieties. Like many of the other answers here, it is ridiculous overkill in the situation described but begins to be less silly when looked at more generally, e.g. in the context of Shimura curves over totally real fields. - Theorem (ZFC + "There exists a supercompact cardinal."): There is no largest cardinal. Proof: Let $\kappa$ be a supercompact cardinal, and suppose that there were a largest cardinal $\lambda$. Since $\kappa$ is a cardinal, $\lambda \geq \kappa$. By the $\lambda$-supercompactness of $\kappa$, let $j: V \rightarrow M$ be an elementary embedding into an inner model $M$ with critical point $\kappa$ such that $M^{\lambda} \subseteq M$ and $j(\kappa) > \lambda$. By elementarity, $M$ thinks that $j(\lambda) \geq j(\kappa) > \lambda$ is a cardinal. Then since $\lambda$ is the largest cardinal, $j(\lambda)$ must have size $\lambda$ in $V$. But then since $M$ is closed under $\lambda$ sequences, it also thinks that $j(\lambda)$ has size $\lambda$. This contradicts the fact that $M$ thinks that $j(\lambda)$, which is strictly greater than $\lambda$, is a cardinal. - 2 It seems that having merely a strong cardinal suffices in your argument, Jason. It seems that this improves the upper bound on the consistency strength of the assertion that there is no largest cardinal! – Joel David Hamkins Aug 20 at 13:00
http://physics.stackexchange.com/questions/47078/what-is-the-laughlin-argument
# What is the Laughlin argument? The fundamental question is Why is Hall conductance quantized? Let's start with the Hall bar, a 2D metal bar subject to a strong perpendicular magnetic field $B_0$. Let current $I$ flow in the x-direction, then the y-direction develops a voltage $V_H$. The Hall conductance is $\sigma_H = I/V_H$ To make Laughlin's charge pump, how should we wrap the Hall bar? Identify the left and right, or top and bottom sides? Based on my understanding, we should paste top and bottom side together. (correct? Figure 1. left of the Paper maybe a little confusing.) Laughlin assumes the Fermi level is in the middle of the gap, so that the ring is an insulator. But the changing flux will induce an current by taking "adiabatic derivative" of total energy w/r flux $$I = c\frac{\partial U}{\partial \Phi}$$ which flows in y-direction and where $c$ is speed of light. Following Laughlin's calculations, as one threads one flux quantum, there will be $p$ (number of filled Landau levels) electrons transported. Then $$U=peV$$ where $V$ is the potential difference of two edges. From the current formula, we find the quantized Hall conductance. The heart of the problem is What is an adiabatic derivative? Why is ${\bf j} = \partial {\cal H}/\partial {\bf A}$ valid? - 1 The form of current operator is valid because $\hat H=(p + eA)^2/2m$ and $\delta \hat H/\delta A = e \delta H/\delta p = e\hat v$. Adiabatic derivative is meant to take derivative of $U=<H>$ without worrying about Boltzmann weights, so $I=c\delta U/\delta A$. – ChenChao Dec 21 '12 at 7:21
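A minimal symbolic check of the identity quoted in the comment (single particle, with the factor of $c$ and the many-body structure suppressed, so this is only a sketch of the bookkeeping):

```python
import sympy as sp

p, A, e, m = sp.symbols('p A e m', positive=True)
H = (p + e * A)**2 / (2 * m)     # minimal-coupling Hamiltonian for one particle

v = sp.diff(H, p)                 # velocity: dH/dp = (p + e*A)/m
print(sp.simplify(sp.diff(H, A) - e * v))   # 0, i.e. dH/dA = e*v, the current operator up to factors of c
```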
http://en.wikipedia.org/wiki/Big_Rip
# Big Rip The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of the universe, from stars and galaxies to atoms and subatomic particles, is progressively torn apart by the expansion of the universe at a certain time in the future. According to the hypothesis, the scale factor of the universe and with it all distances in the universe become infinite at a finite time in the future. ## Definition and overview The hypothesis relies crucially on the type of dark energy in the universe. The key value is the equation of state parameter $w$, the ratio between the dark energy pressure and its energy density. At $w$ < −1, the universe will eventually be pulled apart. Such energy is called phantom energy, an extreme form of quintessence. A universe dominated by phantom energy expands at an ever-increasing rate. However, this implies that the size of the observable universe is continually shrinking; the distance to the edge of the observable universe which is moving away at the speed of light from any point moves ever closer. When the size of the observable universe becomes smaller than any particular structure, no interaction by any of the fundamental forces (gravitational, electromagnetic, weak, or strong) can occur between the most remote parts of the structure. When these interactions become impossible, the structure is "ripped apart". The model implies that after a finite time there will be a final singularity, called the "Big Rip", in which all distances diverge to infinite values. The authors of this hypothesis, led by Robert Caldwell of Dartmouth College, calculate the time from the present to the end of the universe as we know it for this form of energy to be $t_{rip} - t_0 \approx \frac{2}{3|1+w|H_0\sqrt{1-\Omega_m}}$ where $w$ is defined above, H0 is Hubble's constant and Ωm is the present value of the density of all the matter in the universe. In their paper, the authors consider an example with $w$ = −1.5, H0 = 70 km/s/Mpc and Ωm = 0.3, in which case the end of the universe is approximately 22 billion years from the present. This is not considered a prediction, but a hypothetical example. The authors note that evidence indicates $w$ to be very close to −1 in our universe, which makes $w$ the dominating term in the equation. The closer that the quantity (1 + $w$) is to zero, the closer the denominator is to zero and the further the Big Rip is in the future. If $w$ were exactly equal to −1, the Big Rip could not happen, regardless of the values of H0 or Ωm. In their scenario for $w$ = −1.5, the galaxies would first be separated from each other. About 60 million years before the end, gravity would be too weak to hold the Milky Way and other individual galaxies together. Approximately three months before the end, the solar system (or systems similar to our own at this time, as the fate of our own solar system 7.5 billion years in the future is questionable) would be gravitationally unbound. In the last minutes, stars and planets would be torn apart, and an instant before the end, atoms would be destroyed.[1] ## Experimental data According to the latest cosmological data available, the uncertainties are still too large to discriminate among the three cases $w$ < −1, $w$ = −1, and $w$ > −1.[2][3] ## References 1. Caldwell, Robert R.; Kamionkowski, Marc and Weinberg, Nevin N. (2003). "Phantom Energy and Cosmic Doomsday". Physical Review Letters 91 (7): 071301. arXiv:astro-ph/0302506. Bibcode:2003PhRvL..91g1301C. 
doi:10.1103/PhysRevLett.91.071301. PMID 12935004. 2. Allen, S. W.; Rapetti, D. A.; Schmidt, R. W.; Ebeling, H.; Morris, R. G.; Fabian, A. C. (2008). "Improved constraints on dark energy from Chandra X-ray observations of the largest relaxed galaxy clusters". Monthly Notices of the Royal Astronomical Society 383 (3): 879. arXiv:0706.0033. Bibcode:2008MNRAS.383..879A. doi:10.1111/j.1365-2966.2007.12610.x.
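The worked example in the text can be reproduced directly from the displayed formula. In this sketch the unit-conversion constants are standard values supplied here, not taken from the article.

```python
import math

# Example values from the text: w = -1.5, H0 = 70 km/s/Mpc, Omega_m = 0.3.
w, Om = -1.5, 0.3
H0_km_s_Mpc = 70.0
Mpc_km = 3.0857e19                 # kilometres per megaparsec
H0 = H0_km_s_Mpc / Mpc_km          # Hubble constant in 1/s

# t_rip - t_0 ~ 2 / (3 |1+w| H0 sqrt(1 - Omega_m))
t_rip = 2.0 / (3.0 * abs(1.0 + w) * H0 * math.sqrt(1.0 - Om))   # seconds

Gyr = 3.156e16                     # seconds per 10^9 years
print(t_rip / Gyr)                 # ~22 Gyr, matching the "approximately 22 billion years" in the text
```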
http://math.stackexchange.com/questions/262371/how-do-you-solve-the-area-of-a-trapezoid-using-diagonals
# How do you solve the area of a trapezoid using diagonals

The height of a trapezoid is $10$ cm. The lengths of the two diagonals of the trapezoid are $30$ cm and $50$ cm. Calculate the area of the trapezoid.

On the homework I solved this using $${D_1D_2\over 2}$$ and my teacher marked me wrong. So I don't know what I did wrong. Please help. I know I can only use the formula if the diagonals are at $90$ degrees. But how do I check that?

- That is the formula of the area of a rhombus. – lab bhattacharjee Dec 20 '12 at 5:02

## 3 Answers

To help you think about the solution, draw two parallel lines 10 cm apart. The bottom will contain the base B1 and the top will contain the base B2. Draw the two diagonals at the proper lengths from the base to the top line, making sure they cross. Now imagine that you slide one of the diagonals along the bottom and top lines. Notice that (B1+B2)/2 does not change (i.e. one base stretches and the other shrinks), so no matter where the diagonals cross you have the same area. Now slide them apart until the intersection point reaches the top line (i.e. B2 = 0). Now you have a triangle with the same area as the trapezoid. You know the height and two sides of the triangle; a little geometry and you can compute the base and you're there.

The area will be $\frac12\cdot 10\cdot (y+x+y+z)=5(x+2y+z)$. Now, $(y+z)^2+10^2=50^2$ and $(x+y)^2+10^2=30^2$, so $(y+z)=\sqrt{50^2-10^2}\ \mathrm{cm}=20\sqrt6\ \mathrm{cm}$ and $(x+y)=\sqrt{30^2-10^2}\ \mathrm{cm}=20\sqrt2\ \mathrm{cm}$. So the area will be $5\cdot 20\sqrt2(\sqrt3+1)\ \mathrm{cm}^2$.

Gluing two copies of the trapezoid together with one rotated by 180 degrees can yield a parallelogram. One diagonal from each trapezoid cuts the parallelogram into two triangles, each of which has an altitude of 10 and sides adjacent to that altitude's vertex of lengths 30 and 50. So the area is the same as that of a triangle with two sides of 30 and 50, and an altitude between them of 10. Is this enough of a help?

- Trying to understand what you said... Is the answer 193.1 cm? – NGPP1 Dec 20 '12 at 0:35
- This is the right answer, but I don't think your gluing construction works. – Peter Shor Dec 20 '12 at 0:36
- I am still confused – NGPP1 Dec 20 '12 at 0:42
- @PeterShor, in my method, the area is around $386.37\ \mathrm{cm}^2$ – lab bhattacharjee Dec 20 '12 at 5:03
- @PeterShor I have added imagery that demonstrates the gluing construction works. I leave the labeling of edges and confirmation to you. The trapezoid's area is the same as that of the triangle at the end of the process: a triangle with altitude 10 and two surrounding sides of length 30 and 50. – alex.jordan Dec 20 '12 at 10:05
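A numerical restatement of the calculation in the answers: each diagonal's horizontal projection is $\sqrt{d^2-10^2}$, the two projections add up to the sum of the parallel sides, and the area is half the height times that sum.

```python
import math

h, d1, d2 = 10.0, 30.0, 50.0        # height and diagonals from the question
proj1 = math.sqrt(d1**2 - h**2)     # 20*sqrt(2) ~ 28.28
proj2 = math.sqrt(d2**2 - h**2)     # 20*sqrt(6) ~ 48.99
area = 0.5 * h * (proj1 + proj2)
print(area)                          # ~386.37 cm^2, as in lab bhattacharjee's comment
```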
http://en.wikibooks.org/wiki/User:Daviddaved/Y-%CE%94_and_star-mesh_transforms
# User:Daviddaved/Y-Δ and star-mesh transforms

The effective conductivities and the Dirichlet-to-Neumann operator of a network are invariant under the star-mesh transform.

Exercise (**). Use the Schur complement formula $\Lambda = K/C$ for the Dirichlet-to-Neumann map to prove the invariance.

The Y-Δ transform is a special case of the star-mesh transform in which the center node has degree 3.
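Here is a small numerical illustration of the exercise for a 3-terminal star (the conductance values are arbitrary assumptions): the Schur complement of the Kirchhoff matrix over the interior node reproduces the usual Y-Δ conductances, so the Dirichlet-to-Neumann map on the terminals is unchanged by the transform.

```python
import numpy as np

# Kirchhoff (weighted Laplacian) matrix K of a star: terminals 0,1,2 and a centre node 3,
# with leg conductances g[i] between terminal i and the centre.
g = np.array([2.0, 3.0, 5.0])        # assumed conductances
K = np.zeros((4, 4))
for i, gi in enumerate(g):
    K[i, i] += gi; K[3, 3] += gi
    K[i, 3] -= gi; K[3, i] -= gi

# Schur complement over the interior block C = K[3:,3:]:  Lambda = A - B C^{-1} B^T.
A, B, C = K[:3, :3], K[:3, 3:], K[3:, 3:]
Lambda = A - B @ np.linalg.inv(C) @ B.T      # Dirichlet-to-Neumann map on the terminals

# Mesh (triangle) conductances predicted by the Y-Delta transform: g_ij = g_i g_j / (g_0+g_1+g_2).
s = g.sum()
for i in range(3):
    for j in range(3):
        if i != j:
            assert np.isclose(-Lambda[i, j], g[i] * g[j] / s)
print(Lambda)
```

The same elimination performed over several interior nodes at once is exactly the general star-mesh statement of the exercise.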
http://xorshammer.com/2008/08/15/axs-theorem/
August 15, 2008 · 3:58 am # Ax’s Theorem: An Application of Logic to Ordinary Mathematics There are a number of applications of logic to ordinary mathematics, with the most coming from (I believe) model theory. One of the easiest and most striking that I know is called Ax’s Theorem. Ax’s Theorem: For all polynomial functions $f\colon \mathbb{C}^n\to \mathbb{C}^n$, if $f$ is injective, then $f$ is surjective. Very rough proof sketch: The field $\mathbb{C}$ has characteristic 0, so each of the axioms $\psi_p \equiv \overbrace{1 + 1 + \cdots + 1}^{p\text{ 1's}} \ne 0$ (where $p$ is a prime) is true in $\mathbb{C}$. Suppose that some polynomial is injective but not surjective. Then there is a proof of that fact from the axioms of algebraically closed fields, together with the axioms $\psi_p$. But a proof can only use finitely many axioms. Therefore, there must be some $\psi_p$ that is not used in the proof. One can then show that there would be a polynomial function which is injective but not surjective from $F^n$ to $F^n$, where $F$ is a finite field of characteristic $p$. But this is impossible, because $F$ is a finite set. More details below. Fix a language $\mathcal{L} = \{+,-,\times, 0,1\}$. The set of axioms ACF for algebraically closed fields consists of the field axioms ($\forall x\, (x \ne 0 \implies \exists y\, x\times y = 1)$, etc.) together with, for each $d \geq 1$ an axiom $\forall c_0 \cdots \forall c_{d-1} \exists x\, x^d + c_{d-1}x^{d-1} + \cdots + c_0 = 0$ asserting that all monic polynomials of degree $d$ have a root. If $p > 0$ is a prime, then let ACFp be the axioms of ACF together with the axiom $\overbrace{1 + 1 + \cdots + 1}^{p\text{ 1's}} = 0$ asserting that the field has characteristic $p$. Call that axiom $\psi_p$. Let ACF0 be the set of axioms of ACF together with $\neg \psi_p$ for each $p$. (This asserts that the field has characteristic 0). We now have the following lemma, provable by (essentially) logical means. Lemma. All of the theories ACFp and ACF0 are complete, i.e., for any first-order sentence $\phi$ in the language of $\mathcal{L}$, either the theory proves $\phi$ or the theory proves $\neg\phi$. $\square$ Proof of Ax’s Theorem. For each $d$ and $n$, let $\phi_{d,n}$ be a formula asserting that all $n$-tuples of polynomials of degree $d$ in $n$ variables which are injective are surjective. First we show that ACFp proves each $\phi_{d,n}$. First observe that $\phi_{d,n}$ is true in each finite field of characteristic $p$ just by virtue of it being a finite set. Since the algebraic closure of $F_p$ (the field with $p$ elements) is a union of finite fields of characteristic $p$, it is true in that field as well: if there is some injective and non-surjective polynomial function, simply pick a finite field of characteristic $p$ large enough to contain all the coefficients of the polynomial and to witness its non-surjectivity in order to get a contradiction. Since ACFp is complete and there is an algebraically closed field of characteristic $p$ in which $\phi_{d,n}$ is true, it follows that ACFp proves $\phi_{d,n}$. Now, assume that some $\phi_{d,n}$ wasn’t true in $\mathbb{C}$. Then ACF0 would prove $\neg \phi_{d,n}$. But the proof would have to use only finitely many axioms $\psi_p$. If $p_0$ is a prime greater than each $p$ such that $\psi_p$ is used in the proof, then ACFp0 proves $\neg \phi_{d,n}$, contrary to the result of the above paragraph. $\square$ For more information, see David Marker’s introductory notes on model theory here. 
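The finite-field step of the argument is easy to check by brute force. The polynomial map below over $F_5$ is just an arbitrary example; the point is that on a finite set of this kind injectivity and surjectivity are forced to agree.

```python
import itertools

p, n = 5, 2
F = list(itertools.product(range(p), repeat=n))   # all points of F_p^n

def f(v):
    """An arbitrary polynomial map F_p^2 -> F_p^2, chosen for illustration."""
    x, y = v
    return ((x**3 + 2 * y) % p, (y + x * y + 1) % p)

image = {f(v) for v in F}
injective = (len(image) == len(F))
surjective = (image == set(F))
print(injective, surjective, injective == surjective)   # the last value is always True
```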
Filed under Model Theory

### 4 Responses to Ax's Theorem: An Application of Logic to Ordinary Mathematics

1. rjlipton: I linked to your proof. One of my favorite theorems/proofs. So cool that such a simple idea can be made to work.
2. Pingback: Mathematical Embarrassments « Gödel's Lost Letter and P=NP
3. Pingback: Cups, Peas, and Grothendieck « Gödel's Lost Letter and P=NP
4. This is a very nice theorem. One thing that I've been wondering about in this context is whether there are any examples of injective polynomials f that aren't surjective. Incidentally, one way to think of this theorem is as a generalization of the fundamental theorem of algebra since in the single variable case all polynomials are surjective. Similarly, one gets from Picard's theorem that all injective analytic functions are surjective (consider f composed with itself).
http://psychology.wikia.com/wiki/First-order_logic?oldid=54339
# First-order logic

First-order logic (FOL) is a formal language of symbolic logic used by mathematicians, philosophers, linguists, and computer scientists. It goes by many names, including: first-order predicate calculus (FOPC), the lower predicate calculus, the language of first-order logic, or predicate logic. The most common name, however, is FOL, pronounced ef-oh-el.

FOL is a system of deduction extending propositional logic by the ability to express relations between individuals (e.g. people, numbers, and "things") more generally. Propositional logic is not adequate for formalizing valid arguments that rely on the internal structure of the propositions involved. For example, a translation of the valid argument

• All men are mortal
• Socrates is a man
• Therefore, Socrates is mortal

into propositional logic yields

• A
• B
• ∴ C

which is invalid (∴ means "therefore"). The rest of the article explains how FOL is able to handle these sorts of arguments.

The atomic sentences of first-order logic have the form P(t1, ..., tn) (a predicate with one or more "arguments") rather than being propositional letters as in propositional logic. This is usually written without parentheses or commas, as below. The new ingredient of first-order logic not found in propositional logic is quantification: where φ is any (well-formed) formula, the new constructions ∀x φ and ∃x φ — read "for all x, φ" and "for some x, φ" — are introduced, where x is an individual variable whose range is the set of individuals of some given universe of discourse (or domain). For example, if the universe consists solely of people, then x ranges over people. For convenience, we write φ as φ(x) to show that it contains only the variable x free and, for b a member of the universe, we let φ[b] express that b satisfies (i.e. has the property expressed by) φ. Then ∀x φ(x) states that φ[b] is true for every b in the universe, and ∃x φ(x) means that there is a b (in the universe) such that φ[b] holds.

The argument about Socrates can be formalized in first-order logic as follows. Let the universe of discourse be the set of all people, living and deceased, and let Man(x) be a predicate (which, informally, means that the person represented by variable x is a man) and Mortal(x) be a second predicate. Then the argument above becomes

• ∀x (Man(x) → Mortal(x))
• Man(Socrates)
• ∴ Mortal(Socrates)

A literal translation of the first line would be "For all x, if x is described by 'Man', x must also be described by 'Mortal'." The second line states that the predicate "Man" applies to Socrates, and the third line translates to "Therefore, the description 'Mortal' applies to Socrates." Note that in first-order logic, Man(x) is not a function, as functions convert inputs to outputs through a specific process. Man(x) simply means that x is described by "Man", or that x is (a) "Man", or that x falls under the category of "Man".
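To make the semantics above concrete, here is a small sketch (the three-element universe and the interpretations of Man and Mortal are invented for the example, not part of the article) that evaluates the Socrates argument in a finite model.

```python
# Evaluate the Socrates argument in a tiny, made-up finite model.
universe = ["socrates", "plato", "fido"]      # the universe of discourse
man = {"socrates", "plato"}                   # interpretation of Man(x)
mortal = {"socrates", "plato", "fido"}        # interpretation of Mortal(x)

premise1 = all((x not in man) or (x in mortal) for x in universe)  # ∀x (Man(x) → Mortal(x))
premise2 = "socrates" in man                                       # Man(Socrates)
conclusion = "socrates" in mortal                                  # Mortal(Socrates)

# The argument is valid: in any model where both premises hold, so does the
# conclusion.  The assertion below spot-checks that in this particular model.
assert not (premise1 and premise2) or conclusion
print(premise1, premise2, conclusion)
```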
First-order logic has sufficient expressive power for the formalization of virtually all of mathematics. A first-order theory consists of a set of axioms (usually finite or recursively enumerable) and the statements deducible from them. The usual set theory ZFC is an example of a first-order theory, and it is generally accepted that all of classical mathematics can be formalized in ZFC. There are other theories that are commonly formalized independently in first-order logic (though they do admit implementation in set theory) such as Peano arithmetic.

## Defining first-order logic

A predicate calculus consists of

• formation rules (i.e. recursive definitions for forming well-formed formulas).
• transformation rules (i.e. inference rules for deriving theorems).
• axioms or axiom schemata (possibly countably infinite).

The axioms considered here are logical axioms which are part of classical FOL. Further, non-logical axioms are added to yield specific first-order theories that are based on the axioms of classical FOL (and hence are called classical first-order theories, such as classical set theory). The axioms of first-order theories are not regarded as truths of logic per se, but rather as truths of the particular theory that usually has associated with it an intended interpretation of its non-logical symbols. (See an analogous idea at logical versus non-logical symbols.) Classical FOL does not have associated with it an intended interpretation of its non-logical vocabulary (except arguably a symbol denoting identity, depending on whether one regards such a symbol as logical).

When the set of axioms is infinite, it is required that there be an algorithm which can decide for a given well-formed formula whether or not it is an axiom. There should also be an algorithm which can decide whether a given application of an inference rule is correct or not.

It is important to note that FOL can be formalized in many equivalent ways; there is nothing canonical about the axioms and rules of inference given here. There are infinitely many equivalent formalizations all of which yield the same theorems and non-theorems, and all of which have equal right to the title 'FOL'.

## Vocabulary

The "vocabulary" is composed of

1. A set of predicate variables (or relations) each with some valence (or arity) ≥ 1, which are often denoted by uppercase letters P, Q, R, ... .
2. A set of constants, often denoted by lowercase letters at the beginning of the alphabet a, b, c, ... .
3. A set of functions, each of some valence ≥ 1, which are often denoted by lowercase letters f, g, h, ... .
4. An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... .
5. Symbols denoting logical operators (or connectives): $\neg$ (logical not), $\wedge$ (logical and), $\vee$ (logical or), → (logical conditional), ↔ (logical biconditional).
6. Symbols denoting quantifiers: $\forall$ (universal quantification), $\exists$ (existential quantification).
7. Left and right parentheses.
8. An identity or equality symbol = is sometimes but not always included in the vocabulary.

There are several minor variations listed below:

• The set of primitive symbols (operators and quantifiers) often varies. Some symbols may be omitted as primitive and taken as abbreviations instead; e.g. (P ↔ Q) is an abbreviation for (P → Q) $\wedge$ (Q → P).
On the other hand, it is possible to include other operators as primitive, such as the truth constants ⊤ for "true" and ⊥ for "false" (these are operators of valence 0), or the Sheffer stroke (P | Q). The minimum number of primitive symbols needed is one, but if we restrict ourselves to the operators listed above, we need three; for example, ¬, ∧, and ∀ suffice.

• Some older books and papers use the notation φ ⊃ ψ for φ → ψ, ~φ for ¬φ, φ & ψ for φ ∧ ψ, and a wealth of notations for quantifiers; e.g., ∀x φ may be written as (x)φ.
• Equality is sometimes considered to be a part of first order logic; if it is, then the equality symbol is included in the vocabulary, and behaves syntactically as a binary predicate. This case is sometimes called first order logic with equality.
• Constants are really the same as functions of valence 0, so it would be possible and convenient to omit constants and allow functions to have any valence. But it is traditional to use the term "function" only for functions of valence at least 1.
• In the definition above relations must have valence at least 1. It is possible to allow relations of valence 0; these could be considered as propositional variables.
• There are many different conventions about where to put parentheses; for example, one might write ∀x or (∀x). Sometimes one uses colons or full stops instead of parentheses to make formulas unambiguous. One interesting but rather unusual convention is "Polish notation", where one omits all parentheses, and writes ∧, ∨, and so on in front of their arguments rather than between them. Polish notation is compact and elegant, but rare because it is hard for humans to read it.
• A technical observation (of which one can make what one will, philosophically) is that if there is a function symbol of arity 2 representing an ordered pair (or predicate symbols of arity 2 representing the projection relations of an ordered pair) then one can dispense entirely with functions or predicates of arity > 2. Of course the pair or projections need to satisfy the natural axioms.

The sets of constants, functions, and relations are usually considered to form a language, while the variables, logical operators, and quantifiers are usually considered to belong to the logic. For example, the language of group theory consists of one constant (the identity element), one function of valence 1 (the inverse), one function of valence 2 (the product), and one relation of valence 2 (equality), which would be omitted by authors who include equality in the underlying logic.

## Formation rules

The formation rules define the terms, formulas and free variables in them as follows.

Terms: The set of terms is recursively defined by the following rules:

1. Any constant is a term.
2. Any variable is a term.
3. Any expression f(t1,...,tn) of n ≥ 1 arguments (where each argument ti is a term and f is a function symbol of valence n) is a term.
4. Closure clause: Nothing else is a term.

Well-formed Formulas: The set of well-formed formulas (usually called wffs or just formulas) is recursively defined by the following rules:

1. Simple and complex predicates: If P is a relation of valence n ≥ 1 and the ai are terms then P(a1,...,an) is well-formed. If equality is considered part of logic, then (a1 = a2) is well formed. All such formulas are said to be atomic.
2. Inductive Clause I: If φ is a wff, then ¬φ is a wff.
3. Inductive Clause II: If φ and ψ are wffs, then (φ → ψ) is a wff.
4. Inductive Clause III: If φ is a wff and x is a variable, then ∀x φ is a wff.
5. Closure Clause: Nothing else is a wff.

Free variables: The free variables of a formula are defined recursively by the following rules:

1. Atomic formulas: If φ is an atomic formula then x is free in φ if and only if x occurs in φ.
2. Inductive Clause I: x is free in ¬φ if and only if x is free in φ.
3. Inductive Clause II: x is free in (φ → ψ) if and only if x is free in φ or x is free in ψ.
4. Inductive Clause III: x is free in ∀y φ if and only if x is free in φ and x ≠ y.
5. Closure Clause: if x is not free in φ then it is bound.

Since ¬(φ → ¬ψ) is logically equivalent to (φ ∧ ψ), (φ ∧ ψ) is often used as a shorthand. The same principle is behind (φ ∨ ψ) and (φ ↔ ψ). Also ∃x φ is a shorthand for ¬∀x ¬φ.

In practice, if P is a relation of valence 2, we often write "a P b" instead of "P a b"; for example, we write 1 < 2 instead of <(1 2). Similarly if f is a function of valence 2, we sometimes write "a f b" instead of "f(a b)"; for example, we write 1 + 2 instead of +(1 2). It is also common to omit some parentheses if this does not lead to ambiguity.

Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x). This can also be expressed as ∃x (P(x) ∧ ∀y (P(y) → (x = y))).

Examples: The language of ordered abelian groups has one constant 0, one unary function −, one binary function +, and one binary relation ≤. So

• 0, x, y are atomic terms
• +(x, y), +(x, +(y, −(z))) are terms, usually written as x + y, x + y − z
• =(+(x, y), 0), ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas, usually written as x + y = 0, x + y − z ≤ x + y,
• (∀x ∃y ≤( +(x, y), z)) ∧ (∃x =(+(x, y), 0)) is a formula, usually written as (∀x ∃y x + y ≤ z) ∧ (∃x x + y = 0).

## Substitution

If t is a term and φ(x) is a formula possibly containing x as a free variable, then φ(t) is defined to be the result of replacing all free instances of x by t, provided that no free variable of t becomes bound in this process. If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the names of bound variables of φ to something other than the free variables of t.

To see why this condition is necessary, consider the formula φ(x) given by ∀y y ≤ x ("x is maximal"). If t is a term without y as a free variable, then φ(t) just means t is maximal. However if t is y, the formula φ(y) is ∀y y ≤ y, which does not say that y is maximal. The problem is that the free variable y of t (= y) became bound when we substituted y for x in φ(x). So to form φ(y) we must first change the bound variable y of φ to something else, say z, so that φ(y) is then ∀z z ≤ y. Forgetting this condition is a notorious cause of errors.
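To make these recursive definitions concrete, here is a small sketch (an illustration only, not part of the original article) that represents formulas as nested tuples, computes free variables, and performs capture-avoiding substitution exactly as described above. Only the primitive connectives ¬, → and ∀ are handled, since the others are abbreviations, and substituted terms are restricted to variables to keep the sketch short.

```python
# Formulas are nested tuples:
#   ("var", "x")                    a variable (as a term)
#   ("pred", "P", t1, ..., tn)      an atomic formula P(t1, ..., tn)
#   ("not", phi), ("imp", phi, psi), ("all", "x", phi)

def free_vars(phi):
    tag = phi[0]
    if tag == "pred":
        return {t[1] for t in phi[2:]}               # free iff it occurs
    if tag == "not":
        return free_vars(phi[1])
    if tag == "imp":
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag == "all":
        return free_vars(phi[2]) - {phi[1]}          # ∀x binds x
    raise ValueError(tag)

def fresh(avoid):
    i = 0
    while f"z{i}" in avoid:
        i += 1
    return f"z{i}"

def subst(phi, x, t):
    """Replace free occurrences of variable x by the variable term t,
    renaming bound variables so no free variable of t gets captured."""
    tag = phi[0]
    if tag == "pred":
        return (phi[0], phi[1]) + tuple(t if u == ("var", x) else u for u in phi[2:])
    if tag == "not":
        return ("not", subst(phi[1], x, t))
    if tag == "imp":
        return ("imp", subst(phi[1], x, t), subst(phi[2], x, t))
    if tag == "all":
        y, body = phi[1], phi[2]
        if y == x:                                   # x is bound here: nothing to do
            return phi
        if y == t[1]:                                # would capture: rename y first
            z = fresh(free_vars(body) | {x, t[1]})
            body = subst(body, y, ("var", z))
            y = z
        return ("all", y, subst(body, x, t))
    raise ValueError(tag)

# The example from the text: phi(x) is ∀y y ≤ x ("x is maximal").
phi = ("all", "y", ("pred", "<=", ("var", "y"), ("var", "x")))
print(subst(phi, "x", ("var", "y")))   # the bound y is renamed: ∀z0 z0 ≤ y
```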
## Equality

There are several different conventions for using equality (or identity) in first order logic. This section summarizes the main ones. The various conventions all give essentially the same results with about the same amount of work, and differ mainly in terminology.

• The most common convention for equality is to include the equality symbol as a primitive logical symbol, and add the axioms for equality to the axioms for first order logic. The equality axioms are
  x = x
  x = y → f(...,x,...) = f(...,y,...) for any function f
  x = y → (P(...,x,...) → P(...,y,...)) for any relation P (including = itself)
• The next most common convention is to include the equality symbol as one of the relations of a theory, and add the equality axioms to the axioms of the theory. In practice this is almost indistinguishable from the previous convention, except in the unusual case of theories with no notion of equality. The axioms are the same, and the only difference is whether one calls some of them logical axioms or axioms of the theory.
• In theories with no functions and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument. For example, in set theory with one relation ∈, we would define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality.
• In some theories it is possible to give ad hoc definitions of equality. For example, in a theory of partial orders with one relation ≤ we could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.

## Inference rules

The inference rule modus ponens is the only one required from propositional logic for the formalization given here. It states that if φ and φ → ψ are both proved, then one can deduce ψ.

The inference rule called Universal Generalization is characteristic of the predicate calculus. It can be stated as

if $\vdash \varphi$, then $\vdash \forall x \, \varphi$

where φ is supposed to stand for an already-proven theorem of predicate calculus. Notice that Generalization is analogous to the Necessitation Rule of modal logic, which is

if $\vdash P$, then $\vdash \Box P$.

## Quantifier axioms

The following four logical axioms characterize a predicate calculus:

• PRED-1: (∀x Z(x)) → Z(t)
• PRED-2: Z(t) → (∃x Z(x))
• PRED-3: (∀x (W → Z(x))) → (W → ∀x Z(x))
• PRED-4: (∀x (Z(x) → W)) → (∃x Z(x) → W)

These are actually axiom schemata: the expression W stands for any wff in which x is not free, and the expression Z(x) stands for any wff with the additional convention that Z(t) stands for the result of substitution of the term t for x in Z(x).

## Predicate calculus

The predicate calculus is an extension of the propositional calculus that defines which statements of first order logic are provable. It is a formal system used to describe mathematical theories. If the propositional calculus is defined with a suitable set of axioms and the single rule of inference modus ponens (this can be done in many different ways), then the predicate calculus can be defined by appending some additional axioms and the additional inference rule "universal generalization". More precisely, as axioms for the predicate calculus we take:

• All tautologies from the propositional calculus (with the proposition variables replaced by formulas).
• The axioms for quantifiers, given above.
• The axioms for equality given above, if equality is regarded as a logical concept.

A sentence is defined to be provable in first order logic if it can be obtained by starting with the axioms of the predicate calculus and repeatedly applying the inference rules "modus ponens" and "universal generalization". If we have a theory T (a set of statements, called axioms, in some language) then a sentence φ is defined to be provable in the theory T if a ∧ b ∧ ... → φ is provable in first order logic, for some finite set of axioms a, b,... of the theory T.

One apparent problem with this definition of provability is that it seems rather ad hoc: we have taken some apparently random collection of axioms and rules of inference, and it is unclear that we have not accidentally missed out some vital axiom or rule. Gödel's completeness theorem assures us that this is not really a problem: the theorem states that any statement true in all models is provable in first order logic.
In particular, any reasonable definition of "provable" in first order logic must be equivalent to the one above (though it is possible for the lengths of proofs to differ vastly for different definitions of provability). There are many different (but equivalent) ways to define provability. The definition above is a typical example of a "Hilbert style" calculus, which has a lot of different axioms but very few rules of inference. The "Gentzen style" predicate calculus differs in that it has very few axioms but many rules of inference.

### Identities

$\lnot \forall x \, P(x) \Leftrightarrow \exists x \, \lnot P(x)$

$\lnot \exists x \, P(x) \Leftrightarrow \forall x \, \lnot P(x)$

$\forall x \, \forall y \, P(x,y) \Leftrightarrow \forall y \, \forall x \, P(x,y)$

$\exists x \, \exists y \, P(x,y) \Leftrightarrow \exists y \, \exists x \, P(x,y)$

$\forall x \, P(x) \land \forall x \, Q(x) \Leftrightarrow \forall x \, (P(x) \land Q(x))$

$\exists x \, P(x) \lor \exists x \, Q(x) \Leftrightarrow \exists x \, (P(x) \lor Q(x))$

$P \land \exists x \, Q(x) \Leftrightarrow \exists x \, (P \land Q(x))$ (where the variable $x$ must not occur free in $P$)

$P \lor \forall x \, Q(x) \Leftrightarrow \forall x \, (P \lor Q(x))$ (where the variable $x$ must not occur free in $P$)

### Inference rules

$\exists x \, \forall y \, P(x,y) \Rightarrow \forall y \, \exists x \, P(x,y)$

$\forall x \, P(x) \lor \forall x \, Q(x) \Rightarrow \forall x \, (P(x) \lor Q(x))$

$\exists x \, (P(x) \land Q(x)) \Rightarrow \exists x \, P(x) \land \exists x \, Q(x)$

$\exists x \, P(x) \land \forall x \, Q(x) \Rightarrow \exists x \, (P(x) \land Q(x))$

$\forall x \, P(x) \Rightarrow P(c)$ (If c is a variable, then it must not already be quantified somewhere in P(x))

$P(c) \Rightarrow \exists x \, P(x)$ (x must not appear free in P(c))
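These identities and one-way rules are easy to spot-check in a small finite model. The sketch below (an illustration only; the universe and the relation P are invented) verifies an instance of the first identity and of the first one-way rule, and shows that the converse of that rule can fail.

```python
universe = [0, 1, 2]

def P(x, y):
    return (x + y) % 3 == 0   # an arbitrary binary relation on a 3-element universe

# An instance of the first identity:  ¬∀x P(x,0)  ⇔  ∃x ¬P(x,0)
lhs = not all(P(x, 0) for x in universe)
rhs = any(not P(x, 0) for x in universe)
assert lhs == rhs

# The first one-way rule:  ∃x ∀y P(x,y)  ⇒  ∀y ∃x P(x,y)
left = any(all(P(x, y) for y in universe) for x in universe)
right = all(any(P(x, y) for x in universe) for y in universe)
assert (not left) or right

print(lhs, rhs, left, right)   # here left is False while right is True,
                               # so the converse of the one-way rule fails
```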
## Metalogical theorems of first-order logic

Some important metalogical theorems are listed below in bulleted form.

1. Unlike propositional logic, first-order logic is undecidable, provided that the language has at least one predicate of valence at least 2 that is not equality. There is no decision procedure that correctly determines whether an arbitrary formula is valid (this is related to the unsolvability of the Halting problem). This result was established independently by Church and Turing.
2. The decision problem for validity is semidecidable; in other words, there is a Turing machine that when given any sentence as input, will halt if and only if the sentence is valid (true in all models).
   • As Gödel's completeness theorem shows, for any valid formula P, P is provable. Conversely, assuming consistency of the logic, any provable formula is valid.
   • For a finite or recursively enumerable set of axioms, the set of provable formulas can be explicitly enumerated by a Turing machine, thus the result.
3. Monadic predicate logic (i.e., predicate logic with only predicates of one argument) is decidable.
4. The Bernays–Schönfinkel class of first order formulas is also decidable.

## Comparison with other logics

• Typed first order logic allows variables and terms to have various types (or sorts). If there are only a finite number of types, this does not really differ much from first order logic, because one can describe the types with a finite number of unary predicates and a few axioms. Sometimes there is a special type Ω of truth values, in which case formulas are just terms of type Ω.
• Weak second-order logic allows quantification over finite subsets.
• Monadic second-order logic allows quantification over subsets, or in other words over unary predicates.
• Second-order logic allows quantification over subsets and relations, or in other words over all predicates. For example, the axiom of extensionality can be stated in second-order logic as x = y ≡def ∀P (P(x) ↔ P(y)). The strong semantics of second order logic give such sentences a much stronger meaning than first-order semantics.
• Higher order logics allow quantification over higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, etc.
• Intuitionistic first order logic uses intuitionistic rather than classical propositional calculus; for example, ¬¬φ need not be equivalent to φ.
• Modal logic has extra modal operators with meanings which can be characterised informally as, for example, "it is necessary that φ" and "it is possible that φ".
• In Monadic predicate calculus predicates are restricted to having only one argument.
• Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
• First order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branched quantification and the plural quantifiers of George Boolos and others.
• Independence-friendly logic is characterized by branching quantifiers, which allow one to express independence between quantified variables.

Most of these logics are in some sense extensions of first order logic: they include all the quantifiers and logical operators of first order logic with the same meanings. Lindström showed first order logic has no extensions (other than itself) that satisfy both the compactness theorem and the downward Löwenheim-Skolem theorem. A precise statement of Lindström's theorem requires listing several pages of technical conditions that the logic is assumed to satisfy; for example, changing the symbols of a language should make no essential difference to which sentences are true.

First order logic in which no atomic sentence lies in the scope of more than three quantifiers has the same expressive power as the relation algebra of Tarski and Givant (1987). They also show that FOL with a primitive ordered pair, and a relation algebra including the two ordered pair projection relations, are equivalent.

## References

• Jon Barwise and John Etchemendy, 2000. Language Proof and Logic. CSLI (University of Chicago Press) and New York: Seven Bridges Press.
• David Hilbert and Wilhelm Ackermann, 1950. Principles of Theoretical Logic (English translation). Chelsea. The 1928 first German edition was titled Grundzüge der theoretischen Logik.
• Wilfrid Hodges, 2001, "Classical Logic I: First Order Logic," in Lou Goble, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
http://physics.stackexchange.com/questions/2229/if-photons-have-no-mass-how-can-they-have-momentum/30430
# If photons have no mass, how can they have momentum?

As an explanation of why a large gravitational field (such as a black hole) can bend light, I have heard that light has momentum. This is given as a solution to the problem of only massive objects being affected by gravity. However, momentum is the product of mass and velocity, so, by this definition, massless photons cannot have momentum.

How can photons have momentum? How is this momentum defined (equations)?

- 10 Note: one way to resolving this apparent paradox is by noticing that SR doesn't allow massive particle going faster than the speed of light. That means that speed of light is the limit and in certain sense it is infinite (this sense can be made precise by the concept of rapidity). Now, it shouldn't be surprising anymore that the momentum $p = mv$ can be non-zero, because you multiply zero times infinity (again in the above sense) and so in general any result is possible. This is by no means a complete explanation, just something that should give you little intuition. – Marek Dec 24 '10 at 14:38
- 2 – Ben Crowell Aug 8 '11 at 15:22
- 3 @Marek: A better way to formalize the intuition you've expressed is that $p=m\gamma v$, where $m$ is the invariant rest mass. What approaches infinity is not $v$ but $\gamma$. If you evaluate $\lim_{v\rightarrow c}m\gamma v$, while holding $E=m\gamma c^2$ constant, you do get $E/c$. – Ben Crowell Aug 8 '11 at 15:55
- 1 @Ben: well, it's true that this formalizes the argument but it is still unphysical since now you are letting $m \to 0$ and it's not clear what this means physically. Also, one can increase $\gamma$ simply by moving away from the observed stationary particle (and obviously holding $m$ fixed) which gives $E \to \infty$ but naturally this doesn't tell you anything about the particle itself as all of $E$ comes from your moving away. The bottom line is: the fact can't really be derived as a limit of somenthing else. It can either be understood by intuition or else derived directly from SR. – Marek Aug 11 '11 at 12:03
- OK,, let me get this straight photons have mass, and yet they don't. It seems to me a quandary, but if we consider 1 photon is traveling a C, then any mass in the vector of travel at the speed of light becomes infinite while the mass in any perpendicular vector becomes 0. This basic understanding seems to support both views. and since a photon cannot exist if it isn't moving,, we have our answer. – user8499 Apr 5 '12 at 23:33

## 8 Answers

There are two important concepts here that explain the influence of gravity on light (photons).

1. The theory of Special Relativity, proved in 1905 (or rather the 2nd paper of that year on the subject) gives an equation for the relativistic energy of a particle;

$$E^2 = (m_0 c^2)^2 + p^2 c^2$$

where $m_0$ is the rest mass of the particle (0 in the case of a photon). Hence this reduces to $E = pc$. Einstein also introduced the concept of relativistic mass (and the related mass-energy equivalence) in the same paper; we can then write

$$m c^2 = pc$$

where $m$ is the relativistic mass here, hence

$$m = p/c$$

In other words, a photon does have relativistic mass proportional to its momentum.

2. De Broglie's relation, an early result of quantum theory (specifically wave-particle duality), states that

$$\lambda = h / p$$

where $h$ is simply Planck's constant. This gives

$$p = h / \lambda$$

Hence combining the two results, we get

$$m = E / c^2 = h / \lambda c$$

again, paying attention to the fact that $m$ is relativistic mass. And here we have it: photons have 'mass' inversely proportional to their wavelength! Then simply by Newton's theory of gravity, they have gravitational influence. (To dispel a potential source of confusion, Einstein specifically proved that relativistic mass is an extension/generalisation of Newtonian mass, so we should conceptually be able to treat the two the same.)

There are a few different ways of thinking about this phenomenon in any case, but I hope I've provided a fairly straightforward and apparent one. (One could go into general relativity for a full explanation, but I find this the best overview.)
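To put numbers on the relations quoted in this answer, here is a quick sketch (the constants are standard values and the 500 nm wavelength is just an example choice) computing the momentum, energy and the "relativistic mass" the answer associates with a visible-light photon.

```python
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s

lam = 500e-9         # wavelength of a green photon, m (example value)

p = h / lam          # de Broglie: p = h / lambda
E = p * c            # for a massless particle, E = p c
m = E / c**2         # the "relativistic mass" of the answer, m = E / c^2 = h / (lambda c)

print(f"p = {p:.3e} kg m/s, E = {E:.3e} J, m = {m:.3e} kg")
# roughly p ≈ 1.3e-27 kg m/s, E ≈ 4e-19 J, m ≈ 4.4e-36 kg
```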
- 3 This heuristic argument may help convince the questioner that there is some notion of momentum (if not rest mass) attached to photons, but it is incorrect to invoke Newton's theory of gravity to say they have gravitational influence. They do not affect Newton's theory of gravity. The curved trajectories of photons can only be described with special relativity. There is no mechanism within Newton's theory to assert that a shiny flashlight will attract a marble. (In GR, the flashlight beam has a non-zero stress-energy tensor, which will affect nearby trajectories. Hard to say exactly how.) – Eric Zaslow Dec 24 '10 at 15:39
- @Eric: Indeed, the argument almost totally concerns Special Relativity. SR has nothing to say about gravity though, so I've invoked Newton's theory to create a semi-classical explanation. (One could get into GR, but it's not strictly necessary to understand the concept.) – Noldorin Dec 24 '10 at 15:42
- 2 E=pc has nothing to do with quantum mechanics. It's a purely classical feature of SR. To derive E=pc, just set $m_o=0$ in your first equation, and you're done. The discussion about gravity is also irrelevant, since E=pc is about SR, not GR. It's also incorrect to imagine that the energy $E$ of a photon can be used to find an equivalent gravitational mass via $E=mc^2$; the source of gravitational fields in GR is the stress-energy tensor, not the scalar mass-energy. – Ben Crowell Aug 8 '11 at 15:18
- 1 @Ben: the "stress energy tensor" has the energy density as it's 00 component, and the integral of that over space is the relativistic mass. A light wave has a complicated gravitational field, because it is moving fast, but two light-waves moving in opposite directions which are at a box gravitate exactly like a point mass with their "relativistic mass" GM/r^2. "Relativistic mass" is just another name for the energy for a person who has not internalize their equivalence yet. – Ron Maimon Aug 11 '11 at 7:50
- 1 @Ron Maimon: No, the integral of $T_{00}$ over space is not the relativistic mass in GR (and you can't get away with SR here because Noldorin is talking about gravitational fields). There are various ways of defining a scalar, conserved mass-energy in GR (ADM mass, Komar mass, ...), and none of them are simply a volume integral of $T_{00}$. – Ben Crowell Aug 14 '11 at 23:26

The answer to this question is simple and requires only SR, not GR or quantum mechanics. In units with $c=1$, we have $m^2=E^2-p^2$, where $m$ is the invariant mass, $E$ is the mass-energy, and $p$ is the momentum. In terms of logical foundations, there is a variety of ways to demonstrate this. One route starts with Einstein's 1905 paper "Does the inertia of a body depend upon its energy-content?" Another method is to start from the fact that a valid conservation law has to use a tensor, and show that the energy-momentum four-vector is the only tensor that goes over to Newtonian mechanics in the appropriate limit.

Once $m^2=E^2-p^2$ is established, it follows trivially that for a photon, with $m=0$, $E=|p|$, i.e., $p=E/c$ in units with $c \ne 1$.

A lot of the confusion on this topic seems to arise from people assuming that $p=m\gamma v$ should be the definition of momentum. It really isn't an appropriate definition of momentum, because in the case of $m=0$ and $v=c$, it gives an indeterminate form. The indeterminate form can, however, be evaluated as a limit in which $m$ approaches 0 and $E=m\gamma c^2$ is held fixed. The result is again $p=E/c$.
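The limit described in the last paragraph is easy to watch numerically: hold the energy fixed, let a hypothetical rest mass shrink, and the momentum computed from $p = m\gamma v$ approaches $E/c$. A small sketch (the chosen energy is arbitrary):

```python
c = 1.0          # work in units with c = 1
E = 2.0          # hold the total energy fixed (arbitrary value)

for m in [1.0, 0.1, 0.01, 0.001]:
    gamma = E / m                      # from E = m * gamma * c^2 with c = 1
    v = (1 - 1 / gamma**2) ** 0.5      # since gamma = 1 / sqrt(1 - v^2)
    p = m * gamma * v                  # the naive definition of momentum
    print(m, p)                        # p -> E/c = 2.0 as m -> 0
```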
- 2 +1 for sensibility. Most answers seem to think we need to justify light having some kind of positive mass for some reason. Though that can be done, relativistic mass isn't very useful as a concept. – Stan Liou Aug 12 '11 at 1:25

The reason why the path of photons is bent is that the space in which they travel is distorted. The photons follow the shortest possible path (called a geodesic) in bent space. When the space is not bent, or flat, then the shortest possible path is a straight line. When the space is bent with some spherical curvature, the shortest possible path lies actually on an equatorial circumference. Note, this is in General Relativity. In Newtonian gravitation, photons travel in straight lines.

We can associate a momentum of a photon with the De Broglie's relation

$$p=\frac{h}{\lambda}$$

where $h$ is Planck's constant and $\lambda$ is the wavelength of the photon. This also allows us to associate a mass:

$$m=p/c=h/(\lambda c)$$

If we plug in this mass into the Newtonian gravitational formula, however, the result is not compatible with what is actually measured by experimentation.

- 3 This is a somewhat confused answer. The most obvious explanation for photon momentum is due to special relativity. – Noldorin Dec 24 '10 at 13:34
- 1 Actually, many people thought light IS bent in Newtonian gravity, just like any massive particle with finite velocity. The finite velocity of light had been known since Galileo. – Jeremy Dec 24 '10 at 14:34
- 2 @Jeremy: except that Newtonian gravity can't explain GR at all. There are some accidental equalities, like the equity of Schwarzschild radius in both theories. But in other cases values differ. E.g. Newtonian picture predicts twice smaller bending of light than GR (this is again related to the above-mentioned fact that light behaves differently in GR than massive particles) and this shows that any similarity is purely accidental and trying to explain GR effects with Newtonian picture serves only to hide the true concepts that happen in the curved space-time. – Marek Dec 24 '10 at 14:47
- 1 @Bruce: I have to disagree purely on the grounds that this answer is very sketchy. It gives next to no justification or depth unfortunately. GR definitely has something to say here, but it's perhaps not the primary point. – Noldorin Dec 24 '10 at 15:43
- 3 @Noldorin: We'll just have to agree to disagree than. I think that an explanation that gets the numbers clearly wrong is a wrong explanation (wrong as in: it coincidentally predicted the existence of this event, but trying to do the same for other stuff can lead the person to predict the existence of nonexistent phenomena). And I think that using this explanation without disclaimers in bold to warn the reader is, indeed, harmful. – Bruce Connor Dec 24 '10 at 16:14
Until a good specialist comes and cleans up this mess, I am going to touch on a point that was not raised yet. Momentum is the Noether charge of translational invariance. If you consider only massive entities in the system Lagrangian, it will be noticed that the Lagrangian is not invariant under translations unless you add the electromagnetic field Lagrangian.

- 2 This answer is way too confused, so -1. Both massive and massless particles can have translation invariant dynamics (indeed, all of our state-of-the-art theories respect even much stronger Poincaré symmetry) and so there will be conserved charges called momentum in every case. But what precisely is the momentum depends on the precise dynamics of the theory. This is why these notions of momentum differ between classical and special relativistic theories and between massive and massless theories. – Marek Dec 25 '10 at 0:18

If Newton's gravitation could define the bending of light by gravity, then general relativity wouldn't have come up. Photons don't have mass, and it's clear from the fact that they travel at the speed of light. Gravity is an illusion that seems to attract things but in fact it bends spacetime, which is why a straight path seems curved. Newton's law of gravitation is still used because it's simple and we seldom encounter such massive objects as black holes in practical life, for which it does not hold.

In my opinion it is not necessary to invoke the theory of relativity or quantum physics to explain how light can have momentum but no mass. In the 19th century it was already known that light can collide with matter: a beam of light can make a small wheel rotate in vacuum. The key parameter of classical mechanics for the study of collisions is the momentum q = mv, momentum always being conserved in an isolated system. The natural question is: can the principle of conservation of momentum be extended also to electromagnetic radiation? From experience you know that the answer is positive, provided you define the momentum of light as

q = L/c

where L is the energy of light and c the light speed. Can you extend the analogy by assuming that light also has mass? The assumption is reasonable; in case of a positive answer you get Einstein's equation

m = L/c^2

However, you are not allowed to make such an extension, since in physics you must stick to the experimental evidence, and there is no evidence that light also has mass. If so, how do you solve the paradox? The light momentum and the momentum of a material particle are not the same thing.

Light doesn't have momentum in the normal sense that matter has. Frequency and wavelength soak up momentum; the more energetic, the higher the frequency. Wavelength can change even though light stays at c. Light doesn't bend, but space can be deformed. A better question is how can space be deformed when space has even less energy, mass, or wavelength than light does? Space has no mass or momentum, yet it changes (grows) between galaxies in the voids between them. Space takes a tremendous amount of energy to deform, but what is causing space to be deformed? What is gravity exactly? There is something about space and time that is linked together. Time slows down as you encounter a gravitational field. How can time have energy to distort space? Where does the energy that time has come from and where does it go? What is time exactly?
Time and gravity are unknowns yet have a constant predictable nature in physics, except quantum physics. A singular particle can phase into different multiverses popping back into existence by probability. How can a particle exist in two different locations at the same time? Where are multi-verses in relation to this universe located? Is the Weak Nuclear Force weak because it exists in all multi-verses simultaneously in comparison to the other forces in physics that can only exist in one universe at a time? I have about a half dozen more questions but I will just stop at this point. The more we know about the universe the bigger the unknown grows.

- 1 Even pure classical (i.e. Maxwell's equations) light has well defined momentum, a fact which has been known since the end of the 19th century. – dmckee♦ Jul 6 '12 at 13:17

Of course they have mass. When saying "photons have no mass" in LHC rap, they were referring to the rest mass, it just didn't rhyme. (If you pack a bunch of photons into your mirror-coated box, it will be heavier, by E/mc^2 as usual)

- 1 Note that particle physicists, cosmologist and other specialist in relativity do not use the term "relativistic mass" at all finding the term to be definable but not useful. There is just $\text{mass} = (E,\vec{p})^2$ and energy and momentum and kinetic energy. – dmckee♦ May 29 '11 at 1:44
- "If you pack a bunch of photons into your mirror-coated box, it will be heavier, by E/mc^2 as usual." This is sort of true, but not really relevant. What is different between a photon and a photon-filled box is that the former has zero invariant mass, while the latter has a nonzero invariant mass. (Invariant mass isn't additive.) "Heavier" is also somewhat irrelevant and misleading. The question relates to inertial mass, not gravitational mass. – Ben Crowell Aug 8 '11 at 17:36
- In the general case, it won't even be more massive "by $E/c^2$ "as usual." Mass is just the magnitude of the four-momentum vector, and of course the sum of two null (lightlike) four-vectors have a positive magnitude. This explains the 'heavier' box, but the mass of the box will increase by $E/c^2$, E total energy of the photons, only in the frame in which the momenta of the photons balance each other. – Stan Liou Aug 9 '11 at 6:09
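Following up Stan Liou's point that invariant mass is the norm of the total four-momentum rather than a sum of individual masses, here is a small sketch (in units with c = 1, with made-up photon energies) for two photons moving in opposite directions.

```python
# Two photons, each massless, moving in opposite directions along x (units with c = 1).
E1, E2 = 1.0, 1.0
p1 = (E1,  E1, 0.0, 0.0)    # four-momentum (E, px, py, pz) of a photon moving in +x
p2 = (E2, -E2, 0.0, 0.0)    # and one moving in -x

def inv_mass(p):
    E, px, py, pz = p
    return (E**2 - px**2 - py**2 - pz**2) ** 0.5

total = tuple(a + b for a, b in zip(p1, p2))
print(inv_mass(p1), inv_mass(p2), inv_mass(total))   # 0.0, 0.0, 2.0
# Each photon has zero invariant mass, yet the two-photon system has mass E1 + E2
# in the frame where their momenta cancel, as in Stan Liou's comment above.
```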
http://mathoverflow.net/questions/6889/what-is-the-difference-between-a-zeta-function-and-an-l-function/112396
## What is the difference between a zeta function and an L-function?

I've been learning about Dedekind zeta functions and some basic L-functions in my introductory algebraic number theory class, and I've been wondering why some functions are called L-functions and others are called zeta functions. I know that the zeta function is a product of L-functions, so it seems like an L-function is somehow a component of a zeta function (at least in the case of Artin L-functions, they correspond to specific representations). Is this the idea behind the distinction between "zeta function" and "L-function"? How do things generalize to other kinds of zeta- and L-functions?

- Currently, I feel like there is no consensus among the answers, with Hansen's and Buzzard's answers reflecting one view, and engelbrekt and Will Sawin's reflecting another. – David Corwin Nov 14 at 18:08
- It is likely that "all are true", I think... as in my add-on answer below. – paul garrett Nov 14 at 19:00
- There is a reason that Prof. Nick Katz's course this semester is on equidistribution of L-functions over finite fields and not equidistribution of zeta functions over finite fields. – Will Sawin Nov 14 at 19:44
- 2 Dear Davidac, As Paul Garrett says, "all are true". On the one hand, $\zeta$-functions are traditionally associated to the trivial Galois representation, or an entire scheme of finite type over $\mathbb Z$ (or its cohomology with trivial coefficients, if one wants to think sheaf-theoretically), while $L$-functions are what happens when you allow "twists" in what would otherwise be $\zeta$-functions (think of Dirichlet $L$-functions compared to the Riemann $\zeta$-function), such as non-trivial Galois reps., or non-trivial sheaves, etc. But there is no hard and fast rule. ... – Emerton Nov 15 at 3:04
- 1 ... and you shouldn't be very concerned about possible differences in usage. Regards, Matthew – Emerton Nov 15 at 3:10

## 5 Answers

Let me say first that a Dedekind zeta function is always a product of Artin L-functions. It is the structure of the Galois closure which is relevant here. Let me give a nice example which is indicative of the general case. Let $p(x) \in \mathbb{Z}[x]$ be an irreducible cubic, and let $\alpha$ be a root of $p$. Then $K=\mathbb{Q}(\alpha)$ has trivial automorphism group, and its Galois closure (say $L/\mathbb{Q}$) is an S3-extension. The group S3 has three irreducible representations: the trivial representation, the "sign representation" $\chi$ which is also one-dimensional, and an irreducible two-dimensional representation which we will call $\rho$. Then we have the relations $\zeta_K(s)=\zeta_{\mathbb{Q}}(s)L(s,\rho)$ and $\zeta_L(s)=\zeta_{\mathbb{Q}}(s)L(s,\chi)L(s,\rho)^2$. The proofs of these facts are part of the formalism of Artin L-functions.

Generally, the distinction is really a matter of history. Certain objects were named zeta functions - Hasse-Weil, Dedekind - while Dirichlet chose the letter "L" for the functions he made out of characters. However, one feature is that "zeta" functions tend to have poles, and they often "factor" into L-functions. These vagaries are made more precise in various places, for example Iwaniec-Kowalski Ch. 5 and some survey articles on the "Selberg class" of Dirichlet series.
- 1 It sounds like you are saying that every irreducible cubic has a splitting field which has Galois group $S_3$, which is false. Similarly, it is false that Q(\alpha) has trivial automorphism group. Or maybe I'm misreading? – Cam McLeman Jun 9 2010 at 16:52
- @Cam, you are right, I should have said "irreducible cubic with $S3$ splitting field". – David Hansen Jun 9 2010 at 17:03

L-functions depend on characters (or representations), zeta functions do not (or correspond to a trivial character). For example, $L(s,\chi) = \sum_{n = 1}^{\infty}\chi(n)n^{-s}$ where $\chi$ is a Dirichlet character. Supposing that $\chi_0$ is the trivial character modulo $q$, we get $L(s,\chi_0) = \zeta(s)\prod_{p|q}(1 - p^{-s})$ where $\zeta(s)$ is the Riemann zeta function. So the Dirichlet L-functions generalize the Riemann zeta function. The Dedekind zeta function also generalizes in the same way, to L-functions with Hecke Grössencharacters.

(Removed a false statement at the end; as David Hansen points out, one can get a factorization into Artin L-functions (belonging to the Galois closure of the extension) even when the extension is not Galois by factorizing the Dedekind zeta function of the Galois closure and taking away some factors)
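The relation $L(s,\chi_0) = \zeta(s)\prod_{p\mid q}(1 - p^{-s})$ quoted in this answer is easy to sanity-check numerically with truncated sums; in the sketch below the modulus q = 3, the point s = 2, and the truncation are arbitrary example choices.

```python
from math import gcd

q, s, N = 3, 2.0, 200000   # modulus, real s > 1, truncation point (example values)

zeta = sum(n ** -s for n in range(1, N))
L_chi0 = sum(n ** -s for n in range(1, N) if gcd(n, q) == 1)   # trivial character mod q

print(L_chi0, zeta * (1 - q ** -s))   # both are approximately 1.4622 for q = 3, s = 2
```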
- Is the latter true if the number field is Galois? I at least know it's true for cyclic extensions. – David Corwin Nov 26 2009 at 18:09
- Yes, it is true if the extension is Galois. – engelbrekt Nov 26 2009 at 23:01
- I downvoted this due to the falsity of the second-to-last sentence. If edited I will upvote again, and edit my post below accordingly. – David Hansen Nov 27 2009 at 4:52
- In light of David Hansen's remarks, a more precise version of my answer above is that if the extension is Galois, the Dedekind zeta function factorizes into Artin L-functions belonging to that extension. – engelbrekt Nov 27 2009 at 6:07

I do like the other answers, too, but it seemed silly to append comments to all... : To my perception, first, I tend to not feel a difference between "zeta function attached to ..." and "L-function attached to..." if only because usage is variable. Second, in many settings (analytic/automorphic or geometric/motivic or...) a zeta function is an L-function with relatively trivial "further data", whatever that means in context. So, Dedekind zeta functions are Hecke L-functions with trivial data, for example. Analogously for schemes without or with non-trivial sheaf, in the other world. This general rule is certainly not strict... depending on usage. Third, there are the systematic, partly proven, partly conjectural, miracles that "larger" zetas factor into "smaller" L-functions. Classfield theory and such. This does highlight the ambiguity in "usage", namely, that some "base" is necessary to understand "triviality", etc.

Edit: again significantly contingent on "usage"... If we say that an "L-function" (or "zeta function") "has an analytic continuation (provable by us)", then this would accidentally disallow Hasse-Weil zeta/L-functions of general varieties/schemes/whatever, because in this year we know "few" cases wherein we can prove this, although conjecturally it is mostly-always so (meaning that poles, if any, are finite and describable). Similarly, factorization into Artin L-functions is in one way completely fine (for decades), but, in another, unsatisfactory since we do not know their holomorphy, ... so might decide that they're not yet (in 2012) fully-legitimate "L-functions"? And/or that the "factorization" of Dedekind zetas into such things is not entirely satisfactory (as in a comment). I would not be surprised that such "technicalities" persist in things I know less about. :)

- What conjectures are you referring to? My understanding was that all observed factorization can be explained by decomposition of Galois representations into irreducible Galois representations. Are you referring to conjectural factors smaller than that? Or something else? – Will Sawin Nov 14 at 19:37
- Dear Will, For example, it is not known in general that Artin L-functions are holomorphic (rather than just merely meromorphic) in the $s$-plane. If they have poles, then the "factorization" into Artin $L$-functions is not such a good factorization. Regards, Matthew – Emerton Nov 14 at 21:04

Zeta functions arise from schemes. $L$-functions arise from schemes + a sheaf on that scheme. Obviously, the zeta function of a number field $K$ is the zeta function of $\operatorname{Spec} K$. The $L$-function of an elliptic curve is the $L$-function of its Tate module. Its zeta function is $\frac{\zeta(s)\zeta(s-1)}{L(s,E)}$, where $\zeta(s)$ is the Riemann zeta function.

The factorization of a zeta function into $L$-functions is the factorization of the etale cohomology into irreducible Galois representations. The poles arise from the factors that are Tate twists of trivial representations, in particular from $H^0$ and $H^{2d}$.
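For a concrete instance of the elliptic-curve case described in the answer above, one can count points over a small finite field and write down the local factor; the curve $y^2 = x^3 + x + 1$ and the prime 7 below are arbitrary example choices, not taken from the thread.

```python
# Count points on the (example) elliptic curve y^2 = x^3 + x + 1 over F_p.
p = 7

affine = sum(1 for x in range(p) for y in range(p) if (y * y - (x**3 + x + 1)) % p == 0)
n_points = affine + 1              # include the point at infinity
a_p = p + 1 - n_points             # trace of Frobenius

print(n_points, a_p)               # 5 points and a_7 = 3 for this example curve
# The local factor of L(s, E) at p is 1 / (1 - a_p p^{-s} + p^{1-2s}); combined with the
# local factors (1 - p^{-s})^{-1} and (1 - p^{1-s})^{-1} of zeta(s) and zeta(s-1), this
# reproduces the zeta(s) zeta(s-1) / L(s, E) shape quoted in the answer.
```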
- I don't understand the 2nd sentence of the 3rd paragraph. – temp Nov 14 at 19:07
- Dear Will, In the elliptic curve case, you seem to have written down the zeta-function of $E$ (as a scheme over $\mathbb Z$). Regards, Matthew – Emerton Nov 14 at 19:26
- @Emerton: I forgot a few words. Thanks! @temp: I am boldly trying to subsume every other answer into mine. The "poles" comment is from David Hansen's answer. Although I guess you actually have more sources of poles. You can see that the zeta function of an elliptic curve has a pole arising from each $\zeta$ term, which themselves arise from the etale cohomology groups $H^0$ and $H^2$, and a bunch of poles from the inverted $L$ function term. – Will Sawin Nov 14 at 19:31
- Oh, thanks for the edit. From what I understand, L-functions are from representations (of the Galois group or so) and zeta functions are ... err, I don't know where they come from, but when they appear they have their own flavor. – temp Nov 14 at 19:40
- Zeta functions come from all the Galois representations that come from the etale cohomology of a single scheme with coefficients in the constant sheaf, put together. – Will Sawin Nov 15 at 2:03
http://stats.stackexchange.com/questions/28245/monomial-distribution-of-xa-cdot-yb/28307
# Monomial distribution of $X^a \cdot Y^b$ What is the distribution of the following monomial? $$X^a \cdot Y^b$$ where $X$ and $Y$ are normal random variables and $a$ and $b$ are natural numbers. For example, when $X \sim N(0,1)$, $a=2$, and $b=0$ it is a Chi-squared distribution, which has a variance of 2. What if we have $n$ independent variables $X_1, X_2, \dots , X_n$, with $X_i \sim N(0,\sigma^2)$ and some natural numbers $p_1, p_2, \dots,p_n$. What can we say about the variance of the following r.v.? $$X_1^{p_1} \cdot X_2^{p_2} \cdots X_n^{p_n}$$ - 1 – user10525 May 11 '12 at 8:28 ## 1 Answer The first question about a distribution has no convenient general answer, because AFAIK nobody has assigned names to such distributions nor extensively studied and characterized them except when both $a$ and $b$ are $2$ or less. Concerning the second question about the variances, as shorthand write $\mathbf{p}=(p_1,p_2,\ldots,p_n)$ and $\mathbf{x^p} = X_1^{p_1} X_2^{p_2} \cdots X_n^{p_n}$. Recall that for the standard Normal distribution the $k^\text{th}$ moment is $0$ when $k$ is odd and otherwise equals $(k-1)!! = (k-1)(k-3)\cdots(3)(1)$. From this and the independence assumption, the expression for the expectation of $\mathbf{x^p}$ is immediate: it equals $0$ when one or more of the $p_i$ is odd and otherwise is the product of the $(p_i-1)!!$, which I will similarly abbreviate $(\mathbf{p-1})!!$. By definition, $$\text{Var}(\mathbf{x^p}) = \mathbb{E}[(\mathbf{x^p})^2] - \mathbb{E}[\mathbf{x^p}]^2 = (2\mathbf{p}\mathbf{-1})!! - ((\mathbf{p-1})!!)^2.$$ When the variables are scaled to have variance $\sigma^2$, $\mathbf{x^p}$ will be multiplied by $|\sigma|^{p_1+p_2+\cdots+p_n}$, whence its variance will be multiplied by $\sigma^{2(p_1+p_2+\cdots+p_n)}$. ### Example Let $\mathbf{p} = (2,4)$: • $(2\mathbf{p}\mathbf{-1})!! = 3!! 7!! = [3(1)][7(5)(3)(1)] = 315$; • $((\mathbf{p-1})!!)^2 = 1!!3!! = ([1][3(1)])^2 = 9$; • $\text{Var}(X_1^2 X_2^4) = (315 - 9)\sigma^{2(2+4)} = 306\sigma^{12}.$ -
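As a quick numerical sanity check of the formula above (my own addition, not part of the original answer; the function names are hypothetical), here is a short Python sketch. The Monte Carlo comparison is necessarily noisy because the monomial has heavy tails.

```python
import numpy as np

def double_factorial(k):
    """k!! for k >= -1, with (-1)!! = 0!! = 1."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def monomial_variance(p, sigma=1.0):
    """Var(X_1^{p_1} * ... * X_n^{p_n}) for independent X_i ~ N(0, sigma^2),
    following the formula in the answer above."""
    # The mean vanishes if any exponent is odd; otherwise it is prod (p_i - 1)!!
    mean = 0.0 if any(pi % 2 for pi in p) else np.prod([double_factorial(pi - 1) for pi in p])
    second_moment = np.prod([double_factorial(2 * pi - 1) for pi in p])
    return (second_moment - mean**2) * sigma ** (2 * sum(p))

print(monomial_variance((2, 4)))   # 306, matching the worked example

# Rough Monte Carlo check (noisy, since X_1^2 X_2^4 has heavy tails):
rng = np.random.default_rng(0)
X = rng.normal(size=(2_000_000, 2))
print((X[:, 0] ** 2 * X[:, 1] ** 4).var())   # should be close to 306
```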
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223197102546692, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/50099/list
Return to Answer 1 [made Community Wiki] This may be a little tangential to the original question, but it avoids Choice, and 'works with all choices at once'. In Makkai's theory of anafunctors, one recognizes that 'the' functor $C^J \to C$ giving a limit of a (small) diagram $J \to C$ is only really defined via universal properties, and so requires Choice. However, there is a unique anafunctor $C^J$ ⇸ $C$ - which is a span $C^J \leftarrow D \to C$ where the left-pointing leg is fully faithful and surjective on objects - expressing the limit. The category $D$ is defined to consist of limit cones and maps between them. The functor to $C^J$ forgets the vertex of the cone, and the functor to $C$ forgets the diagram and keeps the vertex. The universal properties take care of functoriality, and if one can choose a limit for each diagram, or there are canonical constructions of limits, then this can be converted into an ordinary functor. The cost of working with anafunctors rather than functors is that one gets a bona fide bicategory of categories, rather than a 2-category, but otherwise the whole theory of categories goes through.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113664627075195, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/50964/list
## Return to Answer

2 Corrected grammar, tried to clarify some language.

Positions of the car are in 1-1 correspondence with the isometries of the plane that take it from its initial parked position to a given position. Define a driving metric that is the minimum total number of turns of the drive shaft (~mean arclength of drive wheels) to get from one position to another. As every driver knows from parallel parking, the driving distance to move sideways by a small Euclidean distance $x$, annoyingly, does not decrease to 0 linearly with $x$: it is proportional to $\sqrt x$, because the front wheels need (approximately) to enclose an area proportional to the sideways distance. In the driving metric, the distance from $g*h$ to $h*g$, for example if $g =$ cramp the steering wheel to the left and drive forward one unit while $h =$ cramp the steering wheel to the right and drive forward one unit, is sometimes as much as $2(d(1,g)+d(1,h))$: in general, there may not be a more efficient way to get from the first position $g*h$ to the second $h*g$ than to retrace: first do the inverse of $g*h$, then do $h*g$. More generally, for any Lie group and any subspace $V$ of its Lie algebra (the tangent space at $1$) that generates the Lie algebra, there is a Carnot-Caratheodory metric that measures path lengths along the left-invariant plane field which agrees with $V$, with respect to a left-invariant Riemannian metric defined on this plane field. These metrics are important in many real situations. Any non-abelian Lie group has many such subspaces. Just as in driving, where cars follow paths that have bounded curvature, one can further restrict to paths whose tangent vectors are in the left-invariant cone field that extends any cone in the Lie algebra that generates the Lie algebra, although the Lipschitz equivalence class of the resulting metric depends only on the linear span of the cone (assuming a finite dimensional Lie group).

1 Yes, it's an easy and well-known fact that for a Lie group with a smooth Riemannian metric (which we may assume left-invariant) $$d([g,h], 1) = d(g*h, h*g) = O(d(1,g)*d(1,h)) .$$ This follows from differentiability, and from the observation that the commutator map $[,]: G \times G \rightarrow G$ maps the submanifolds $1 \times G$ and $G \times 1$ both to $1$, so the first derivative of the commutator is 0. The second derivative is given by the Lie bracket. However, any Lie group whose identity component is not abelian admits left-invariant path-metrics where this inequality fails, in particular, its Carnot-Caratheodory metrics. Consider, for instance, the group of isometries of the plane from the point of view of driving a car in a flat area. A position of the car is in 1-1 correspondence with the isometry of the plane that takes it from its initial parked position to its current position. Define a metric that is the minimum total number of turns of the drive shaft to get from one position to another. As every driver knows from parallel parking, the driving distance to move sideways by a small Euclidean distance does not go to 0 linearly with Euclidean distance: it is proportional to the square root of the Euclidean distance, because the front wheels need (approximately) to enclose an area proportional to the sideways distance.
In the driving metric, the distance from $g*h*$ (cramp the steering wheel to the left, then to the right, both while driving forward 1 unit) to $h*g$ (cramp to the left, then to the right) is $2( d(1,g)+d(1,h))$: in general, there's no more efficient way to get from the first position to the second than to undo first, then do the second. More generally, for any Lie group and any subspace $V$ of its Lie algebra (the tangent space at $1$) that generates the Lie algebra, there is a Carnot-Caratheodory metric that measures path lengths with along the left-invariant plane field which agrees with $V$, with respect to a left-invariant Riemannian metric defined on this plane field. These metrics are important in many real situations. The question has at least as much to do with the metric as with the group.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380442500114441, "perplexity_flag": "head"}
http://mathoverflow.net/questions/46502/on-the-number-of-archimedean-solids/61477
## On the number of Archimedean solids ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Does anyone know of any good resources for the proof of the number of Archimedean solids (also known as semiregular polyhedra)? I have seen a couple of algebraic discussions but no true proof. Also, I am looking more at trying to prove it topologically, but for now, any resource will help.* *I worked on this project a bit as an undergraduate and am just now getting back into it. - 1 I don't think 'proof assistant' means what you think it means. – HW Nov 18 2010 at 17:32 18 Having just spent an hour lecturing about the $p$-adic numbers, this question instantly makes me wonder about the classification of non-Archimedean solids :-/ – Kevin Buzzard Nov 18 2010 at 18:00 ## 5 Answers A proof of the enumeration theorem for the Archimedean solids (which basically dates back to Kepler) can be found in the beautiful book "Polyhedra" by P.R. Cromwell (Cambridge University Press 1997, pp. 162-167). - This is indeed a very good proof resource. I just glanced briefly at it, but will explore it further when I get a chance. I really appreciate your suggestion - it is actually the closest to the algorithm I am using than any I have found so far. Thanks again. – Tyler Clark Nov 18 2010 at 20:42 I'm glad you found it helpful. – Andrey Rekalo Nov 18 2010 at 21:05 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Incidentally, you may be interested in the article by Joseph Malkevitch, "Milestones in the history of polyhedra," which appeared in Shaping Space: A Polyhedral Approach, Marjorie Senechal and George Fleck, editors, pages 80-92. Birkhauser, Boston, 1988. There he makes the case (following Grünbaum) that there should be 14 Archimedean solids rather than 13, including the pseudorhombicuboctahedron(!) as the 14th. - I will definitely try to find this article and read through it. Thanks for your feedback! – Tyler Clark Nov 18 2010 at 20:42 Following up on Joseph's comment: Branko Grünbaum and others have pointed out that besides the 13 or 14, there are also two infinite families of polyhedra meeting the definition of Archimedean, although generally not considered to be Archimedean. Why prisms and antiprisms are excluded from the list has never been clear to me. In any case, this is not just a historical curiosity --- in any attempt you make to classify them, you should run into these two infinite families. If you use a modern definition, i.e. vertex-transitive, then you will also get 13 others. And a little group theory can help in the classification. If you use a more classical definition, i.e. "locally vertex-regular," you will indeed find a 14th. - You are indeed correct. The algorithm I worked on with another classmate did in fact give us the prisms and antiprisms. It has been a while since I have worked on this and I am just now getting back into it. I cannot remember why we excluded the prisms and antiprisms - I will have to take a closer look at that issue. – Tyler Clark Nov 18 2010 at 20:40 I use a slightly different approach than Cromwell. Please see the Exercises at the end of Chapter 5 here: http://staff.imsa.edu/~vmatsko/pgsCh1-5.pdf. This is a draft of a textbook I am writing, and currently using to teach a course on polyhedra. 
The level of the text is mid-level undergraduate, so strictly speaking, the Exercises are really an outline of a rigorous enumeration. Symmetry considerations are glossed over. - My proof can be found here: ywhmaths.webs.com/Geometry/ArchimedeanSolids.pdf -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9418805241584778, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Law_of_cosines&diff=17470&oldid=17466
# Law of cosines

### From Math Images

## Revision as of 11:36, 30 May 2011

The law of cosines is a formula that helps in triangulation when two or three side lengths of a triangle are known. The formula relates all three side lengths of a triangle to the cosine of a particular angle.

$c^{2} = a^{2} + b^{2} - 2ab \cos C$

When to use it: SAS, SSS.

## Proof

Let $\vartriangle ABC$ be oriented so that $C$ is at the origin, $B$ is at the point $(a,0)$, and $A$ is at the point $(b \cos C, b \sin C)$.

### Distance Formula

$distance = \sqrt {(x_{2}-x_{1})^{2} + (y_{2}-y_{1})^{2}}$

$c$ is the distance from $A$ to $B$. Substituting the appropriate points into the distance formula gives us

$c = \sqrt {(a-b \cos C)^{2} + (0-b \sin C)^{2}}$

Squaring the inner terms, we have

$c = \sqrt {(a^{2}-2ab \cos C+b^{2} \cos^{2} C) + (b^{2} \sin^{2} C)}$

Since $\cos^{2} C + \sin^{2} C = 1$,

$c = \sqrt {a^{2}+b^{2}-2ab \cos C}$

Square both sides for

$c^{2} = a^{2}+b^{2}-2ab \cos C$

## Example Triangulation

Complete the triangle with $a=6$, $b=6 \sqrt{2}$, and $C=45^\circ$ using the law of cosines.

$c^{2} = a^{2} + b^{2} - 2ab \cos C$

### Solution

To find the side length $c$,

$c^{2} = 6^{2} + (6 \sqrt{2})^{2} -2 (6) (6 \sqrt{2}) \cos 45^\circ$

Simplify for

$c^{2} =36 + 36 (2) - 72 \sqrt{2} \cos 45^\circ$

Since $\cos 45^\circ = \frac{1}{\sqrt{2}}$, substitution gives us

$c^{2} =36 + 36 (2) - 72 \sqrt{2} (\frac{1}{\sqrt{2}})$

Simplify for

$c^{2} =36 + 72 - 72$

$c^{2} =36$

Taking the square root of both sides gives us

$c =6$

Now we can orient the triangle differently to get a new version of the law of cosines so we can find angle measure $B$,

$b^{2} = a^{2} + c^{2} - 2ac \cos B$

Substituting in the appropriate side lengths gives us

$(6 \sqrt{2})^{2} = 6^{2} + 6^{2} - 2(6)(6) \cos B$

Simplify for

$36 (2) = 36 + 36 - 72 \cos B$

$72 = 72 - 72 \cos B$

Subtracting $72$ from both sides gives us

$0 = - 72 \cos B$

Dividing both sides by $-72$ gives us

$0 = \cos B$

Using inverse trig, we know that

$B = 90^\circ$

And we can find the last angle measure $A$ by subtracting the other two measures from $180^\circ$

$180^\circ - 90^\circ - 45^\circ = 45^\circ$

$A=45^\circ$
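For readers who want to reproduce the SAS computation above numerically, here is a small Python sketch (my own addition, not part of the original page); it uses only the standard `math` module and the triangle data from the example.

```python
import math

# SAS data from the example above: a = 6, b = 6*sqrt(2), angle C = 45 degrees.
a, b = 6.0, 6.0 * math.sqrt(2)
C = math.radians(45)

# Law of cosines for the missing side, then solve for angle B, then A.
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))
B = math.degrees(math.acos((a**2 + c**2 - b**2) / (2 * a * c)))
A = 180.0 - B - math.degrees(C)

print(c, B, A)   # 6.0, 90.0, 45.0 (up to floating-point rounding)
```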
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 37, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8185712099075317, "perplexity_flag": "middle"}
http://psychology.wikia.com/wiki/Autocorrelation
# Autocorrelation

Autocorrelation is a mathematical tool used frequently in signal processing for analysing functions or series of values, such as time domain signals. Informally, it is a measure of how well a signal matches a time-shifted version of itself, as a function of the amount of time shift. More precisely, it is the cross-correlation of a signal with itself. Autocorrelation is useful for finding repeating patterns in a signal, such as determining the presence of a periodic signal which has been buried under noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies.

## Definitions

Different definitions of autocorrelation are in use depending on the field of study which is being considered and not all of them are equivalent. In some fields, the term is used interchangeably with autocovariance.

### Statistics

In statistics, the autocorrelation function (ACF) of a random process describes the correlation between the process at different points in time. Let $X_t$ be the value of the process at time $t$ (where $t$ may be an integer for a discrete-time process or a real number for a continuous-time process). If $X_t$ has mean $\mu$ and variance $\sigma^2$ then the definition of the ACF is

$R(t,s) = \frac{E[(X_t - \mu)(X_s - \mu)]}{\sigma^2}\, ,$

where $E$ is the expected value operator. Note that this expression is not well-defined for all time series or processes, since the variance $\sigma^2$ may be zero (for a constant process) or infinite. If the function $R$ is well-defined its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and $-1$ indicating perfect anti-correlation.

If $X_t$ is second-order stationary then the ACF depends only on the difference between $t$ and $s$ and can be expressed as a function of a single variable. This gives the more familiar form

$R(k) = \frac{E[(X_i - \mu)(X_{i+k} - \mu)]}{\sigma^2}\, ,$

where $k$ is the lag $|t - s|$. It is common practice in many disciplines to drop the normalization by $\sigma^2$ and use the term autocorrelation interchangeably with autocovariance.

For a discrete time series of length $n$, $\{X_1, X_2, \dots, X_n\}$, with known mean and variance, an estimate of the autocorrelation may be obtained as

$\hat{R}(k)=\frac{1}{(n-k) \sigma^2} \sum_{t=1}^{n-k} [X_t-\mu][X_{t+k}-\mu]$

for any positive integer $k < n$. If the true mean and variance of the process are not known then $\mu$ and $\sigma^2$ may be replaced by the standard formulae for sample mean and sample variance, although this leads to a biased estimator.[1]

### Signal processing

In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient.[2]

Given a signal $f(t)$, the continuous autocorrelation $R_{ff}(\tau)$ is most often defined as the continuous cross-correlation integral of $f(t)$ with itself, at lag $\tau$.
$R_{ff}(\tau) = \overline{f}(-\tau) * f(\tau) = \int_{-\infty}^{\infty} f(t+\tau)\overline{f}(t)\, dt = \int_{-\infty}^{\infty} f(t)\overline{f}(t-\tau)\, dt$ where $\bar f$ represents the complex conjugate and $*$ represents convolution. For a real function, $\bar f = f$. The discrete autocorrelation R at lag j for a discrete signal xn is $R_{xx}(j) = \sum_n x_n \overline{x}_{n-j} \ .$ The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. For wide-sense-stationary random processes, the autocorrelations are defined as $R_{ff}(\tau) = E\left[f(t)\overline{f}(t-\tau)\right]$ $R_{xx}(j) = E\left[x_n \overline{x}_{n-j}\right]$ For processes that are not stationary, these will also be functions of t, or n. For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to[2] $R_{ff}(\tau) = \lim_{T \rightarrow \infty} {1 \over T} \int_{0}^{T} f(t+\tau)\overline{f}(t)\, dt$ $R_{xx}(j) = \lim_{N \rightarrow \infty} {1 \over N} \sum_{n=0}^{N-1}x_n \overline{x}_{n-j}$ These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes. Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.) Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be $R(j,k,\ell) = \sum_{n,q,r} (x_{n,q,r})(x_{n-j,q-k,r-\ell}).$ When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function. ## Properties In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. • A fundamental property of the autocorrelation is symmetry, R(i) = R(−i), which is easy to prove from the definition. In the continuous case, the autocorrelation is an even function $R_f(-\tau) = R_f(\tau)\,$ when f is a real function and the autocorrelation is a Hermitian function $R_f(-\tau) = R_f^*(\tau)\,$ when f is a complex function. • The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay τ, $|R_f(\tau)| \leq R_f(0)$. This is a consequence of the Cauchy–Schwarz inequality. The same result holds in the discrete case. • The autocorrelation of a periodic function is, itself, periodic with the very same period. • The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all τ) is the sum of the autocorrelations of each function separately. • Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation. • The autocorrelation of a white noise signal will have a strong peak (represented by a Dirac delta function) at τ = 0 and will be absolutely 0 for all other τ. 
This shows that a sampled instance of a white noise signal is not statistically correlated to a sample instance of the same white noise signal at another time.

• The Wiener–Khinchin theorem relates the autocorrelation function to the power spectral density via the Fourier transform:

$R(\tau) = \int_{-\infty}^\infty S(f) e^{j 2 \pi f \tau} \, df$

$S(f) = \int_{-\infty}^\infty R(\tau) e^{- j 2 \pi f \tau} \, d\tau.$

• For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only:

$R(\tau) = \int_{-\infty}^\infty S(f) \cos(2 \pi f \tau) \, df$

$S(f) = \int_{-\infty}^\infty R(\tau) \cos(2 \pi f \tau) \, d\tau.$

## Autocorrelation in regression analysis

In regression analysis using time series data, autocorrelation of the residuals ("error terms", in econometrics) is a problem, and leads to an upward bias in estimates of the statistical significance of coefficient estimates, such as the t statistic. The traditional test for the presence of first-order autocorrelation is the Durbin-Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch-Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) $k$ lags of the residuals, where $k$ is the order of the test. The simplest version of the test statistic from this auxiliary regression is $TR^2$, where $T$ is the sample size and $R^2$ is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as $\chi^2$ with $k$ degrees of freedom. Responses to autocorrelation include differencing of the data and the use of lag structures in estimation.

## Applications

• One application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators.
• In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field.
• In signal processing, autocorrelation can give information about repeating events like musical beats or pulsar frequencies, though it cannot tell the position in time of the beat.
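To make the estimator $\hat{R}(k)$ from the Statistics section concrete, here is a short numpy sketch (my own addition, not part of the original article). The AR(1) test signal is an assumed example, chosen because its theoretical autocorrelation at lag $k$ is simply $\varphi^k$.

```python
import numpy as np

def acf(x, k, mu=None, var=None):
    """Estimate R(k) = E[(X_t - mu)(X_{t+k} - mu)] / sigma^2 at lag k >= 1,
    using the sample mean/variance when the true ones are not supplied."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean() if mu is None else mu
    var = x.var() if var is None else var
    return np.sum((x[: n - k] - mu) * (x[k:] - mu)) / ((n - k) * var)

# Assumed test signal: an AR(1) process X_t = phi * X_{t-1} + eps_t.
rng = np.random.default_rng(1)
phi, n = 0.8, 200_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

for k in (1, 2, 5, 10):
    print(k, round(acf(x, k), 3), round(phi ** k, 3))  # estimate vs. theory
```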
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8762744665145874, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2012/09/12/lie-algebra-modules/?like=1&source=post_flair&_wpnonce=d0199d491e
# The Unapologetic Mathematician

## Lie Algebra Modules

It should be little surprise that we're interested in concrete actions of Lie algebras on vector spaces, like we were for groups. Given a Lie algebra $L$ we define an $L$-module to be a vector space $V$ equipped with a bilinear function $L\times V\to V$ — often written $(x,v)\mapsto x\cdot v$ — satisfying the relation

$\displaystyle [x,y]\cdot v=x\cdot(y\cdot v)-y\cdot(x\cdot v)$

Of course, this is the same thing as a representation $\phi:L\to\mathfrak{gl}(V)$. Indeed, given a representation $\phi$ we can define $x\cdot v=[\phi(x)](v)$; given an action we can define a representation $\phi(x)\in\mathfrak{gl}(V)$ by $[\phi(x)](v)=x\cdot v$. The above relation is exactly the statement that the bracket in $L$ corresponds to the bracket in $\mathfrak{gl}(V)$.

Of course, the modules of a Lie algebra form a category. A homomorphism of $L$-modules is a linear map $\phi:V\to W$ satisfying

$\displaystyle\phi(x\cdot v)=x\cdot\phi(v)$

We automatically get the concept of a submodule — a subspace sent back into itself by each $x\in L$ — and a quotient module. In the latter case, we can see that if $W\subseteq V$ is any submodule then we can define $x\cdot(v+W)=(x\cdot v)+W$. This is well-defined, since if $v+w$ is any other representative of $v+W$ then $x\cdot(v+w)=x\cdot v+x\cdot w$, and $x\cdot w\in W$, so $x\cdot v$ and $x\cdot(v+w)$ both represent the same element of $v+W$. Thus, every submodule can be seen as the kernel of some homomorphism: the projection $V\to V/W$.

It should be clear that every homomorphism has a kernel, and a cokernel can be defined simply as the quotient of the range by the image. All we need to see that the category of $L$-modules is abelian is to show that every epimorphism is actually a quotient, but we know this is already true for the underlying vector spaces. Since the (vector space) kernel of an $L$-module map is an $L$-submodule, this is also true for $L$-modules.
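As a concrete illustration of the defining relation (my own addition, not part of the original post), here is a small numpy check that matrices acting on column vectors give an $L$-module: take $x$ and $y$ to be the standard nilpotent matrices in $\mathfrak{sl}_2$, let the bracket be the commutator, and verify $[x,y]\cdot v=x\cdot(y\cdot v)-y\cdot(x\cdot v)$ on a sample vector.

```python
import numpy as np

# x, y act on V = C^2 through 2x2 matrices; the bracket is the commutator.
x = np.array([[0.0, 1.0], [0.0, 0.0]])   # "e" in sl_2
y = np.array([[0.0, 0.0], [1.0, 0.0]])   # "f" in sl_2
v = np.array([2.0, -1.0])                # an arbitrary test vector

bracket = x @ y - y @ x                  # [x, y] = diag(1, -1)

lhs = bracket @ v                        # [x, y] . v
rhs = x @ (y @ v) - y @ (x @ v)          # x.(y.v) - y.(x.v)
print(np.allclose(lhs, rhs))             # True: the module axiom holds
```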
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8941384553909302, "perplexity_flag": "head"}
http://aimath.org/textbooks/beezer/LTsection.html
Linear Transformations Early in Chapter VS:Vector Spaces we prefaced the definition of a vector space with the comment that it was "one of the two most important definitions in the entire course." Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go. ## Linear Transformations Definition LT (Linear Transformation) A linear transformation, $\ltdefn{T}{U}{V}$, is a function that carries elements of the vector space $U$ (called the domain) to the vector space $V$ (called the codomain), and which has two additional properties 1. $\lt{T}{\vect{u}_1+\vect{u}_2}=\lt{T}{\vect{u}_1}+\lt{T}{\vect{u}_2}$ for all $\vect{u}_1,\,\vect{u}_2\in U$ 2. $\lt{T}{\alpha\vect{u}}=\alpha\lt{T}{\vect{u}}$ for all $\vect{u}\in U$ and all $\alpha\in\complex{\null}$ The two defining conditions in the definition of a linear transformation should "feel linear," whatever that means. Conversely, these two conditions could be taken as exactly what it means to be linear. As every vector space property derives from vector addition and scalar multiplication, so too, every property of a linear transformation derives from these two defining properties. While these conditions may be reminiscent of how we test subspaces, they really are quite different, so do not confuse the two. Here are two diagrams that convey the essence of the two defining properties of a linear transformation. In each case, begin in the upper left-hand corner, and follow the arrows around the rectangle to the lower-right hand corner, taking two different routes and doing the indicated operations labeled on the arrows. There are two results there. For a linear transformation these two expressions are always equal. Definition of Linear Transformation, Additive Definition of Linear Transformation, Multiplicative A couple of words about notation. $T$ is the name of the linear transformation, and should be used when we want to discuss the function as a whole. $\lt{T}{\vect{u}}$ is how we talk about the output of the function, it is a vector in the vector space $V$. When we write $\lt{T}{\vect{x}+\vect{y}}=\lt{T}{\vect{x}}+\lt{T}{\vect{y}}$, the plus sign on the left is the operation of vector addition in the vector space $U$, since $\vect{x}$ and $\vect{y}$ are elements of $U$. The plus sign on the right is the operation of vector addition in the vector space $V$, since $\lt{T}{\vect{x}}$ and $\lt{T}{\vect{y}}$ are elements of the vector space $V$. These two instances of vector addition might be wildly different. Let's examine several examples and begin to form a catalog of known linear transformations to work with. Example ALT: A linear transformation. It can be just as instructive to look at functions that are not linear transformations. Since the defining conditions must be true for all vectors and scalars, it is enough to find just one situation where the properties fail. Example NLT: Not a linear transformation. Example LTPM: Linear transformation, polynomials to matrices. Example LTPP: Linear transformation, polynomials to polynomials. Linear transformations have many amazing properties, which we will investigate through the next few sections. However, as a taste of things to come, here is a theorem we can prove now and put to use immediately. Theorem LTTZZ (Linear Transformations Take Zero to Zero) Suppose $\ltdefn{T}{U}{V}$ is a linear transformation. Then $\lt{T}{\zerovector}=\zerovector$. Proof. 
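If you like to experiment numerically, the two defining properties can be spot-checked on random inputs. The sketch below is my own stand-in (the archetype examples are only referenced by name in this text): `T_lin` is built from a matrix, while `T_aff` shifts by a constant vector and so fails both properties (and Theorem LTTZZ). A passing check is only evidence of linearity, but a single failure is a genuine disproof, exactly as noted above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))
c = np.array([0.0, 0.0, -2.0])

T_lin = lambda u: A @ u          # a linear transformation (see Theorem MBLT below)
T_aff = lambda u: A @ u + c      # NOT linear: T_aff(0) = c, violating Theorem LTTZZ

def looks_linear(T, dim, trials=1000):
    """Check the two defining properties of Definition LT on random inputs."""
    for _ in range(trials):
        u1, u2 = rng.normal(size=dim), rng.normal(size=dim)
        alpha = rng.normal()
        if not np.allclose(T(u1 + u2), T(u1) + T(u2)):
            return False
        if not np.allclose(T(alpha * u1), alpha * T(u1)):
            return False
    return True

print(looks_linear(T_lin, 2))    # True
print(looks_linear(T_aff, 2))    # False, caught on the very first trial
```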
Return to Example NLT and compute $\lt{S}{\colvector{0\\0\\0}}=\colvector{0\\0\\-2}$ to quickly see again that $S$ is not a linear transformation, while in Example LTPM compute $\lt{S}{0+0x+0x^2+0x^3}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$ as an example of Theorem LTTZZ at work. ## Linear Transformation Cartoons Throughout this chapter, and Chapter R:Representations, we will include drawings of linear transformations. We will call them "cartoons," not because they are humorous, but because they will only expose a portion of the truth. A Bugs Bunny cartoon might give us some insights on human nature, but the rules of physics and biology are routinely (and grossly) violated. So it will be with our linear transformation cartoons. Here is our first, followed by a guide to help you understand how these are meant to describe fundamental truths about linear transformations, while simultaneously violating other truths. General Linear Transformation Here we picture a linear transformation $\ltdefn{T}{U}{V}$, where this information will be consistently displayed along the bottom edge. The ovals are meant to represent the vector spaces, in this case $U$, the domain, on the left and $V$, the codomain, on the right. Of course, vector spaces are typically infinite sets, so you'll have to imagine that characteristic of these sets. A small dot inside of an oval will represent a vector within that vector space, sometimes with a name, sometimes not (in this case every vector has a name). The sizes of the ovals are meant to be proportional to the dimensions of the vector spaces. However, when we make no assumptions about the dimensions, we will draw the ovals as the same size, as we have done here (which is not meant to suggest that the dimensions have to be equal). To convey that the linear transformation associates a certain input with a certain output, we will draw an arrow from the input to the output. So, for example, in this cartoon we suggest that $\lt{T}{\vect{x}}=\vect{y}$. Nothing in the definition of a linear transformation prevents two different inputs being sent to the same output and we see this in $\lt{T}{\vect{u}}=\vect{v}=\lt{T}{\vect{w}}$. Similarly, an output may not have any input being sent its way, as illustrated by no arrow pointing at $\vect{t}$. In this cartoon, we have captured the essence of our one general theorem about linear transformations, Theorem LTTZZ, $\lt{T}{\zerovector_U}=\zerovector_V$. On occasion we might include this basic fact when it is relevant, at other times maybe not. Note that the definition of a linear transformation requires that it be a function, so every element of the domain should be associated with some element of the codomain. This will be reflected by never having an element of the domain without an arrow originating there. These cartoons are of course no substitute for careful definitions and proofs, but they can be a handy way to think about the various properties we will be studying. ## Matrices and Linear Transformations If you give me a matrix, then I can quickly build you a linear transformation. Always. First a motivating example and then the theorem. Example LTM: Linear transformation from a matrix. So the multiplication of a vector by a matrix "transforms" the input vector into an output vector, possibly of a different size, by performing a linear combination. And this transformation happens in a "linear" fashion. 
This "functional" view of the matrix-vector product is the most important shift you can make right now in how you think about linear algebra. Here's the theorem, whose proof is very nearly an exact copy of the verification in the last example. Theorem MBLT (Matrices Build Linear Transformations) Suppose that $A$ is an $m\times n$ matrix. Define a function $\ltdefn{T}{\complex{n}}{\complex{m}}$ by $\lt{T}{\vect{x}}=A\vect{x}$. Then $T$ is a linear transformation. Proof. So Theorem MBLT gives us a rapid way to construct linear transformations. Grab an $m\times n$ matrix $A$, define $\lt{T}{\vect{x}}=A\vect{x}$ and Theorem MBLT tells us that $T$ is a linear transformation from $\complex{n}$ to $\complex{m}$, without any further checking. We can turn Theorem MBLT around. You give me a linear transformation and I will give you a matrix. Example MFLT: Matrix from a linear transformation. Example MFLT was not accident. Consider any one of the archetypes where both the domain and codomain are sets of column vectors (Archetype M through Archetype R) and you should be able to mimic the previous example. Here's the theorem, which is notable since it is our first occasion to use the full power of the defining properties of a linear transformation when our hypothesis includes a linear transformation. Theorem MLTCV (Matrix of a Linear Transformation, Column Vectors) Suppose that $\ltdefn{T}{\complex{n}}{\complex{m}}$ is a linear transformation. Then there is an $m\times n$ matrix $A$ such that $\lt{T}{\vect{x}}=A\vect{x}$. Proof. So if we were to restrict our study of linear transformations to those where the domain and codomain are both vector spaces of column vectors (Definition VSCV), every matrix leads to a linear transformation of this type (Theorem MBLT), while every such linear transformation leads to a matrix (Theorem MLTCV). So matrices and linear transformations are fundamentally the same. We call the matrix $A$ of Theorem MLTCV the matrix representation of $T$. We have defined linear transformations for more general vector spaces than just $\complex{m}$, can we extend this correspondence between linear transformations and matrices to more general linear transformations (more general domains and codomains)? Yes, and this is the main theme of Chapter R:Representations. Stay tuned. For now, let's illustrate Theorem MLTCV with an example. Example MOLT: Matrix of a linear transformation. ## Linear Transformations and Linear Combinations It is the interaction between linear transformations and linear combinations that lies at the heart of many of the important theorems of linear algebra. The next theorem distills the essence of this. The proof is not deep, the result is hardly startling, but it will be referenced frequently. We have already passed by one occasion to employ it, in the proof of Theorem MLTCV. Paraphrasing, this theorem says that we can "push" linear transformations "down into" linear combinations, or "pull" linear transformations "up out" of linear combinations. We'll have opportunities to both push and pull. Theorem LTLC (Linear Transformations and Linear Combinations) Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation, $\vectorlist{u}{t}$ are vectors from $U$ and $\scalarlist{a}{t}$ are scalars from $\complex{\null}$. Then \begin{equation*} \lt{T}{\lincombo{a}{u}{t}} = a_1\lt{T}{\vect{u}_1}+ a_2\lt{T}{\vect{u}_2}+ a_3\lt{T}{\vect{u}_3}+\cdots+ a_t\lt{T}{\vect{u}_t} \end{equation*} Proof. 
Some authors, especially in more advanced texts, take the conclusion of Theorem LTLC as the defining condition of a linear transformation. This has the appeal of being a single condition, rather than the two-part condition of Definition LT. (See exercise LT.T20). Our next theorem says, informally, that it is enough to know how a linear transformation behaves for inputs from any basis of the domain, and all the other outputs are described by a linear combination of these few values. Again, the statement of the theorem, and its proof, are not remarkable, but the insight that goes along with it is very fundamental. Theorem LTDB (Linear Transformation Defined on a Basis) Suppose $B=\set{\vectorlist{u}{n}}$ is a basis for the vector space $U$ and $\vectorlist{v}{n}$ is a list of vectors from the vector space $V$ (which are not necessarily distinct). Then there is a unique linear transformation, $\ltdefn{T}{U}{V}$, such that $\lt{T}{\vect{u}_i}=\vect{v}_i$, $1\leq i\leq n$. Proof. You might recall facts from analytic geometry, such as "any two points determine a line" and "any three non-collinear points determine a parabola." Theorem LTDB has much of the same feel. By specifying the $n$ outputs for inputs from a basis, an entire linear transformation is determined. The analogy is not perfect, but the style of these facts is not very dissimilar from Theorem LTDB. Notice that the statement of Theorem LTDB asserts the existence of a linear transformation with certain properties, while the proof shows us exactly how to define the desired linear transformation. The next examples show how to work with linear transformations that we find this way. Example LTDB1: Linear transformation defined on a basis. Example LTDB2: Linear transformation defined on a basis. Here is a third example of a linear transformation defined by its action on a basis, only with more abstract vector spaces involved. Example LTDB3: Linear transformation defined on a basis. Informally, we can describe Theorem LTDB by saying "it is enough to know what a linear transformation does to a basis (of the domain)." ## Pre-Images The definition of a function requires that for each input in the domain there is exactly one output in the codomain. However, the correspondence does not have to behave the other way around. A member of the codomain might have many inputs from the domain that create it, or it may have none at all. To formalize our discussion of this aspect of linear transformations, we define the pre-image. Definition PI (Pre-Image) Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation. For each $\vect{v}$, define the pre-image of $\vect{v}$ to be the subset of $U$ given by \begin{equation*} \preimage{T}{\vect{v}}=\setparts{\vect{u}\in U}{\lt{T}{\vect{u}}=\vect{v}} \end{equation*} In other words, $\preimage{T}{\vect{v}}$ is the set of all those vectors in the domain $U$ that get "sent" to the vector $\vect{v}$. Example SPIAS: Sample pre-images, Archetype S. The preimage is just a set; it is almost never a subspace of $U$ (you might think about just when $\preimage{T}{\vect{v}}$ is a subspace, see exercise ILT.T10). We will describe its properties going forward, and it will be central to the main ideas of this chapter. ## New Linear Transformations From Old We can combine linear transformations in natural ways to create new linear transformations. So we will define these combinations and then prove that the results really are still linear transformations. First the sum of two linear transformations.
Definition LTA (Linear Transformation Addition) Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{U}{V}$ are two linear transformations with the same domain and codomain. Then their sum is the function $\ltdefn{T+S}{U}{V}$ whose outputs are defined by \begin{equation*} \lt{(T+S)}{\vect{u}}=\lt{T}{\vect{u}}+\lt{S}{\vect{u}} \end{equation*} Notice that the first plus sign in the definition is the operation being defined, while the second one is the vector addition in $V$. (Vector addition in $U$ will appear just now in the proof that $T+S$ is a linear transformation.) Definition LTA only provides a function. It would be nice to know that when the constituents ($T$, $S$) are linear transformations, then so too is $T+S$. Theorem SLTLT (Sum of Linear Transformations is a Linear Transformation) Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{U}{V}$ are two linear transformations with the same domain and codomain. Then $\ltdefn{T+S}{U}{V}$ is a linear transformation. Proof. Example STLT: Sum of two linear transformations. Definition LTSM (Linear Transformation Scalar Multiplication) Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation and $\alpha\in\complex{\null}$. Then the scalar multiple is the function $\ltdefn{\alpha T}{U}{V}$ whose outputs are defined by \begin{equation*} \lt{(\alpha T)}{\vect{u}}=\alpha\lt{T}{\vect{u}} \end{equation*} Given that $T$ is a linear transformation, it would be nice to know that $\alpha T$ is also a linear transformation. Theorem MLTLT (Multiple of a Linear Transformation is a Linear Transformation) Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation and $\alpha\in\complex{\null}$. Then $\ltdefn{(\alpha T)}{U}{V}$ is a linear transformation. Proof. Example SMLT: Scalar multiple of a linear transformation. Now, let's imagine we have two vector spaces, $U$ and $V$, and we collect every possible linear transformation from $U$ to $V$ into one big set, and call it $\vslt{U}{V}$. Definition LTA and Definition LTSM tell us how we can "add" and "scalar multiply" two elements of $\vslt{U}{V}$. Theorem SLTLT and Theorem MLTLT tell us that if we do these operations, then the resulting functions are linear transformations that are also in $\vslt{U}{V}$. Hmmmm, sounds like a vector space to me! A set of objects, an addition and a scalar multiplication. Why not? Theorem VSLT (Vector Space of Linear Transformations) Suppose that $U$ and $V$ are vector spaces. Then the set of all linear transformations from $U$ to $V$, $\vslt{U}{V}$ is a vector space when the operations are those given in Definition LTA and Definition LTSM. Proof. Definition LTC (Linear Transformation Composition) Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{V}{W}$ are linear transformations. Then the composition of $S$ and $T$ is the function $\ltdefn{(\compose{S}{T})}{U}{W}$ whose outputs are defined by \begin{equation*} \lt{(\compose{S}{T})}{\vect{u}}=\lt{S}{\lt{T}{\vect{u}}} \end{equation*} Given that $T$ and $S$ are linear transformations, it would be nice to know that $\compose{S}{T}$ is also a linear transformation. Theorem CLTLT (Composition of Linear Transformations is a Linear Transformation) Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{V}{W}$ are linear transformations. Then $\ltdefn{(\compose{S}{T})}{U}{W}$ is a linear transformation. Proof. Example CTLT: Composition of two linear transformations. Here is an interesting exercise that will presage an important result later. In Example STLT compute (via Theorem MLTCV) the matrix of $T$, $S$ and $T+S$. 
Do you see a relationship between these three matrices? In Example SMLT compute (via Theorem MLTCV) the matrix of $T$ and $2T$. Do you see a relationship between these two matrices? Here's the tough one. In Example CTLT compute (via Theorem MLTCV) the matrix of $T$, $S$ and $\compose{S}{T}$. Do you see a relationship between these three matrices???
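If you want to experiment with this exercise numerically, the following sketch (my own; the matrices for $T$ and $S$ are stand-ins, since Examples STLT, SMLT and CTLT are not reproduced here) builds the matrix of a linear transformation column-by-column from the images of the standard basis vectors, and then prints the matrices of $T+S$, $2T$ and $\compose{S}{T}$ so you can compare them with the matrices $A$ and $B$ of $T$ and $S$ themselves.

```python
import numpy as np

def matrix_of(T, n):
    """Matrix of a linear transformation T: C^n -> C^m, built column-by-column
    from the images of the standard basis vectors (the idea behind Theorem MLTCV)."""
    return np.column_stack([T(e) for e in np.eye(n)])

# Hypothetical stand-ins for T and S, both linear maps C^2 -> C^2.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: A @ x
S = lambda x: B @ x

print(matrix_of(lambda x: T(x) + S(x), 2))   # compare with A and B
print(matrix_of(lambda x: 2 * T(x), 2))      # compare with A
print(matrix_of(lambda x: S(T(x)), 2))       # compare with A and B
```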
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 123, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9080772995948792, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/189612-let-b-b-being-real-numbers-b-1-show-m-b-random-m-print.html
# let a<b with a, b being real numbers and b-a>1. Show that a<m<b for a random m Printable View • October 5th 2011, 07:02 AM juanma101285 let a<b with a, b being real numbers and b-a>1. Show that a<m<b for a random m Hi, I have the following three problems, but I only want to know what the first one is like... I am sure I will then be able to figure out the other two. The problem says: "In the following, a, b are real numbers. 1) Let b>a+1. Show that there is an integer m such that a<m<b. 2) Let a,b > 0. Show that there is an integer n such that n*a > b. 3) Let a<b. Prove that there is an integer n such that n*b > n*a+1. Conclude that there is a rational number q such that a<q<b." I would really appreciate it if you could give me a hand with the first one so that I can see how to solve the other 2 problems. Thanks a lot! • October 5th 2011, 07:39 AM HallsofIvy Re: let a<b with a, b being real numbers and b-a>1. Show that a<m<b for a random m Let S be the set of all integers, x, with x< a. That set clearly is non-empty and has a as upper bound. By the "well ordered" property of the integers, any non-empty set of integers, having an upper bound, has a largest member. Let n be the largest member of this set. What can you say about n+1? • October 5th 2011, 07:54 AM Plato Re: let a<b with a, b being real numbers and b-a>1. Show that a<m<b for a random m Quote: Originally Posted by juanma101285 The problem says: "In the following, a, b are real numbers. 1) Let b>a+1. Show that there is an integer m such that a<m<b. I would really appreciate it if you could give me a hand with the first one so that I can see how to solve the other 2 problems. Theorm: If $\alpha\in\mathbb{R}$ then $\left( {\exists j \in \mathbb{Z}} \right)\left[ {j \leqslant \alpha < j + 1} \right]$ The proof goes like this: Let $A=\{n\in\mathbb{Z}:n\le\alpha\}.$ That set exists because the integers are not bounded below. Moreover the $\sup(A)\in\mathbb{Z}$. There is our $j$. That proves that the greatest integer $\left\lfloor \alpha \right\rfloor$ exists. Now for your question. Suppose that $b>a+1$. The theorem tells us that $\left\lfloor a \right\rfloor\le a<\left\lfloor a \right\rfloor+1$. So $a<\left\lfloor a \right\rfloor+1\le a+1<b$ EDIT this is a bit more. I did not see reply #2. All times are GMT -8. The time now is 02:57 AM.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942363977432251, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/59738/the-complex-version-of-nashs-theorem-is-not-true/59771
## “The complex version of Nash’s theorem is not true” ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The title quote is from p.221 of the 2010 book, The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions by Shing-Tung Yau and Steve Nadis. "Nash's theorem" here refers to the Nash embedding theorem (discussed in an earlier MO question: "Nash embedding theorem for 2D manifolds"). I would appreciate any pointer to literature that explains where and why embedding fails. This is well-known in the right circles, but I am having difficulty locating sources. Thanks! - ## 4 Answers The failure is actually more profound than you might guess at first glance: There are conformal metrics on the Poincare disk that cannot (even locally) be isometrically induced by embedding in $\mathbb{C}^n$ by any holomorphic mapping. For example, there is no complex curve in $\mathbb{C}^n$ for which the induced metric has either curvature that is positive somewhere or constant negative curvature. You can get around the positivity problem by looking for complex curves in $\mathbb{P}^n$ (with the Fubini-Study metric, say), but even there, there are no complex curves with constant negative curvature. More generally, for any Kahler metric $g$ on an $n$-dimensional complex manifold $M$, there always exist many metrics on the Poincare disk that cannot be isometrically induced on the disk via a holomorphic embedding into $M$. This should not be surprising if you are willing to be a little heuristic: Holomorphic mappings of a disk into an $n$-dimensional complex manifold essentially depend on choosing $n$ holomorphic functions of one complex variable and each such holomorphic function essentially depends on choosing two (analytic) real functions of a single real variable. However, the conformal metrics on the disk depend essentially on one (positive) smooth function of two variables, which is too much `generality' for any finite number of functions of a single variable to provide. - 1 I am quite willing to be more than a little heuristic under such brilliant guidance, Robert! Thanks for so much enlightenment in so few lines. – Georges Elencwajg Apr 2 2011 at 18:37 1 i really like the hueristic argument too! – Colin Tan Apr 19 2011 at 12:22 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Using the maximum modulus principle you can show that $\mathbb{C}^n$ doesn't have any compact complex submanifolds of positive dimension. It follows that lots of complex manifolds, such as complex grassmannians and projective spaces for example, do not embed into $\mathbb{C}^n$. - 1 Perfect---Thank you!! – Joseph O'Rourke Mar 27 2011 at 18:21 1 Nitpick: Stein manifolds are those that admit a proper embedding into $\mathbb C^n$. An open subset of $\mathbb C^n$ is not necessarily Stein. – jvp Mar 29 2011 at 11:45 1 Moreover, Nash theorem is about isometric embedding... – diverietti Apr 4 2011 at 19:31 jvp & diverietti: Thanks for the comments. I've removed my remark about Stein manifolds. – Faisal Apr 4 2011 at 21:48 As Faisal says, there is no hope to have a general Nash-type theorem for all complex manifold, when the ambient space considered for the (isometric) embedding is some $\mathbb C^N$: no compact complex manifold of positive dimension could admit it. 
On the other hand, there are a lot of compact complex manifolds of positive dimension. Restricting attention to the compact case, then, one may ask whether there is a natural analogue of the Nash embedding theorem. For instance, one may wonder whether, given any compact hermitian manifold $(X,\omega)$, one can embed it isometrically into some projective space, endowed with its natural Fubini-Study metric. It turns out that there are several restrictions, both of an analytic and of a geometric nature. For instance, the starting metric $\omega$ must then be a Kähler form (as a restriction of the Fubini-Study Kähler form). It then gives a nonzero cohomology class in $H^{1,1}(X,\mathbb R)\subset H^2(X,\mathbb R)$. Moreover, since the Fubini-Study metric is the Chern curvature form of the (anti)tautological line bundle, its cohomology class must be integral, that is, in the image $H^{1,1}(X,\mathbb Z)$ in $H^{1,1}(X,\mathbb R)$ of the natural inclusion $H^2(X,\mathbb Z)\subset H^{2}(X,\mathbb R)$. The celebrated Kodaira embedding theorem states that the converse is in fact true: Let $X$ be a compact complex manifold of dimension $\dim X=n$ possessing an integral $(1,1)$-cohomology class $[\omega]\in H^{1,1}(X,\mathbb Z)$ such that $[\omega]$ can be represented by a closed positive smooth $(1,1)$-form $\omega$. Then, there is an embedding $\iota\colon X\hookrightarrow \mathbb P^N$ into some complex projective space (and a posteriori $X$ is indeed algebraic by Chow's theorem). This embedding is obtained as follows. Given $[\omega]$, there exists a holomorphic hermitian line bundle $L\to X$ with hermitian metric $h$ such that $c_1(L)=[\omega]$ and, moreover, the Chern curvature form satisfies $\frac i{2\pi}\Theta(L,h)=\omega$. Then, for all $m\in\mathbb N$, one considers the holomorphic map $$ \begin{aligned} \varphi_{|L^{\otimes m}|}\colon & X\setminus\operatorname{Bs}(L^{\otimes m})\to\mathbb P(H^0(X,L^{\otimes m})^*) \\ & x\mapsto\{\sigma\in H^0(X,L^{\otimes m})\mid \sigma(x)=0\}, \end{aligned} $$ where $\operatorname{Bs}(L^{\otimes m})$ is the set of points of $X$ where all sections in $H^0(X,L^{\otimes m})$ vanish simultaneously. What can be shown is in fact that for all sufficiently large $m$, one has that $\operatorname{Bs}(L^{\otimes m})=\emptyset$ and $\varphi_{|L^{\otimes m}|}$ is an immersive homeomorphism onto its image. Moreover, denoting by $\mathcal O(1)$ the (anti)tautological line bundle on $\mathbb P(H^0(X,L^{\otimes m})^*)$, one has that $$\varphi_{|L^{\otimes m}|}^*\mathcal O(1)\simeq L^{\otimes m}.$$ Now, $H^0(X,L^{\otimes m})$ has a natural inner product, namely the $L^2$-product given by $$\langle\langle\sigma,\tau{}\rangle\rangle_{L^2}=\int_X\langle\sigma(x),\tau(x)\rangle_{h^{\otimes m}}\frac{\omega^{n}}{n!},$$ and thus we have a corresponding Fubini-Study metric on $\mathbb P(H^0(X,L^{\otimes m})^*)$; call it $\omega_{FS}$. Then, $\varphi_{|L^{\otimes m}|}^*\omega_{FS}$ lies in the same cohomology class as $c_1(L^{\otimes m})=m\cdot c_1(L)$. In particular, $\omega$ and $\frac 1m\varphi_{|L^{\otimes m}|}^*\omega_{FS}$ are cohomologous and, since $X$ is Kähler, by the $\partial\bar\partial$-lemma we have that $$\omega-\frac 1m\varphi_{|L^{\otimes m}|}^*\omega_{FS}=\frac i{2\pi}\partial\bar\partial f$$ for some globally defined smooth function $f\colon X\to\mathbb R$. 
This means that the new hermitian metric $\tilde h=he^{f}$ on $L$ has Chern curvature equal to $\frac 1m\varphi_{|L^{\otimes m}|}^*\omega_{FS}$, and with this new metric $\varphi_{|L^{\otimes m}|}$ becomes (after rescaling by the factor $m$) an isometric embedding (of course this is almost tautological: the only point here is that we modify the original metric just by rescaling the hermitian metric on the line bundle whose curvature was our original metric). Note that we didn't obtain an isometric embedding for the original metric. The best you can do with the original metric is to approximate it in the $C^2$-topology by the sequence of metrics $(\frac 1m\varphi_{|L^{\otimes m}|}^*\omega_{FS})_{m\in\mathbb N}$, by a result contained in the PhD thesis of G. Tian (please see the answer below by Joel Fine for more on that); the convergence is now known to be in the $C^\infty$ topology on the space of symmetric covariant $2$-tensors, cf. the comment of Joel Fine here below. Remark also that not every compact Kähler manifold admits such a cohomology class, so the theorem really does give a criterion. For instance, a generic compact complex torus of complex dimension greater than or equal to two is a compact Kähler manifold (with its natural flat metric) which does not admit any embedding into a projective space. Turning to the literature: for the compact case there are several wonderful books. I'll give you two or three names, which are my favorites. (1) J.-P. Demailly, "Complex Analytic and Differential Geometry", (2) C. Voisin, "Théorie de Hodge et géométrie algébrique complexe". (3) R. O. Wells Jr., "Differential analysis on complex manifolds". For the non-compact case, there are also plenty of books of course. If you want to know more on the theory of Stein manifolds (precisely the closed analytic submanifolds of some $\mathbb C^N$), then for example (4) L. Hörmander, "An introduction to complex analysis in several variables" would do the job, at least for an introduction. [edited after comments by Joel Fine and Robert Bryant] - 1 @diverletti: Check your statement of the Kodaira embedding theorem. The embedding $\iota:X\hookrightarrow\mathbb{P}^N$ that pulls back the Fubini-Study $(1,1)$-form to be the given form is NOT holomorphic in general; it's only symplectic. The Kodaira embedding theorem is about the space of sections of `positive' bundles, and you don't mention this. If $X$ is a Riemann surface of genus $g>1$ and $\omega$ is the positive integral $(1,1)$-form that has constant negative curvature, there is NO holomorphic embedding into $\mathbb{P}^N$ that induces an integral multiple of $\omega$. – Robert Bryant May 9 2011 at 15:19 Dear Robert, I think that my statement is definitely true. You just make some confusion. The Kähler metrics I consider, being integral, are already seen as curvature forms of some line bundle (whose first Chern class is represented by the given Kähler form and, as such is positive, so that the line bundle itself is, by definition, positive). Your counterexample is not true because of the following reason: you consider the Kähler metric as a metric on the tangent bundle (which, in the case of Riemann surfaces is incidentally a line bundle, too) and then you take the curvature (to be continued) – diverietti May 9 2011 at 23:14
What your example says, instead, is that if you normalize your Kähler form in such a way that its total volume is for instance one, then you will find a (ample) hermitian line bundle on X whose curvature is exactly your form. – diverietti May 9 2011 at 23:19 2 @diverletti: You need to look up a statement of the Kodaira embedding theorem. It does not say what you think it does. I stand by my claim that there is NO holomorphic curve (compact or not) in $\mathbb{CP}^n$ such that the Fubini-Study metric induces a metric (i.e., a positive (1,1)-form) of constant negative curvature on the curve. (It is a old theorem of Calabi, not mine.) As Joel points out below, the Kodaira embedding theorem is not about isometric embeddings. – Robert Bryant May 12 2011 at 3:59 2 I think this version of your answer is much better! Just one small technical point: the convergence of the sequence you mention is in fact in the C-infinity topology. Tian originally proved C^2 convergence but this has subsequently been improved. (I guess this extension is due to Ruan, but I'm a little hazy on the exact history. There is also important work of Zelditch and Catlin on this.) – Joel Fine Dec 21 2011 at 11:41 show 2 more comments This is not so much an answer to the original question, more an addition to the answer of diverietti and part of the answer Robert Bryant. Both mention the following analogue of Nash's embedding problem: Given a Kähler manifold $(X, \omega)$, is there a projective embedding $X \to \mathbb{CP}^n$ for which the Fubini-Study metric pulls back to give $\omega$? As Robert says, taken literally the answer is no. However, there is a beautiful theorem of Tian which says that the answer is yes, provided one is willing to let $n$, the dimension of the projective space, tend to infinity. More precisely, let $L \to X$ be a positive holomorphic line bundle on $X$. This means there is a Hermitian metric $h$ in $L$ whose curvature is a Kähler form $\omega$ in $c_1(L)$. (Moreover, all Kähler forms in $c_1(L)$ arise this way.) With this metric $h$ and the volume form $\omega^n$ you can define an $L^2$-inner-product on the space of holomorphic sections of $L^k$, where $k$ is a large integer. Let $s_0, \ldots , s_n$ be an orthonormal basis of holomorphic sections (where $n$ depends on $k$ roughly $n \sim k^m$ where $m$ is the dimension of $X$). Then (for large $k$) the map $$f_k(x) = [s_0(x) : \cdots : s_n(x) ]$$ defines an embedding to $\mathbb{CP}^n$ which has the following property: if we restrict the Fubini-Study metric from projective space to $X$ via the map $f_k$, rescale by $1/k$ (to keep the total volume fixed) and then take the limit as $k \to \infty$ we get the original metric $\omega$. - Joel, I am not sure I can understand completely your answer. For Kodaira's embedding theorem already follows that if you rescale the restriction of the Fubini-Study metric by one over the power of the line bundle you need to embed it, then it is the original metric. When does the $L^2$ metric come into the picture? Which is exactely the theorem of Tian you are talking to? – diverietti Apr 4 2011 at 19:38 Kodaira's theorem tells you that for sufficiently large $k$, the map $f_k$ gives you an embedding for any choice of basis of hol sections of $L^k$ whatsoever. If you pull back the cohomology class of the Fubini-Study metric and divide by 1/k then you get the first Chern class of L. But Kodaira's theorem says nothing at all about the pull back of the actual metric itself. 
To get a well defined metric via an embedding, you must specify the basis (at least up to the action of the unitary group). That's where you need the $L^2$ inner-product. Continued... – Joel Fine Apr 4 2011 at 19:55 ... If you want to recover the original metric you started with, you use an $L^2$-orthonormal basis of sections for each $k$, pull back the metric, rescale by $1/k$ and then take a limit as $k \to \infty$. This gives you a sequence of Kähler metrics on $X$ converging to the one you first thought of. This is proved by Tian in his article "On a set of polarized Kähler metrics on algebraic manifolds." J. Differential Geom. 32 (1990). – Joel Fine Apr 4 2011 at 19:58
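In the notation of Joel Fine's answer above, the convergence statement being described (a result of Tian, with the later improvements noted in the comments) can be written compactly; this display is only a restatement of what is said above, not an additional claim: $$\frac 1k\, f_k^{\,*}\,\omega_{FS}\;\longrightarrow\;\omega\qquad\text{as } k\to\infty,$$ where $f_k$ is the projective embedding defined by an $L^2$-orthonormal basis of holomorphic sections of $L^k$, and the convergence holds in the $C^\infty$ topology on symmetric covariant $2$-tensors.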
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 112, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9117828607559204, "perplexity_flag": "head"}
http://www.newworldencyclopedia.org/entry/Frequency
# Frequency From New World Encyclopedia Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The horizontal axis represents time. Frequency is a measure of the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency. The period is the duration of one cycle in a repeating event, so the period is the reciprocal of the frequency. Frequency is an important parameter in our understanding of waves and of wavelike phenomena, such as electromagnetism. ## Definition and units For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles, or periods, per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by a Greek letter ν (nu). In SI units, the unit of frequency is hertz (Hz), named after the German physicist Heinrich Hertz. For example, 1 Hz means that an event repeats once per second, 2 Hz is twice per second, and so on.[1] This unit was originally called one cycle per second (cps), which is still sometimes used. Heart rate and musical tempo are measured in beats per minute (BPM). Frequency of rotation is often expressed as a number of revolutions per minute (rpm). BPM and rpm values must be divided by 60 to obtain the corresponding value in Hz: thus, 60 BPM translates into 1 Hz. The period is usually denoted as T, and is the reciprocal of the frequency f: $T = \frac{1}{f}.$ The SI (as well as English) unit for period is the second (s). ## Some parameters of a wave A wave can be described mathematically using a series of parameters including its wavelength, wavenumber, period, frequency, and amplitude. The wavelength (denoted as λ) is the distance between two successive crests (or troughs). It is generally measured on the metric scale (in meters, centimeters, and so on). For the optical part of the electromagnetic spectrum, wavelength is commonly measured in nanometers (one nanometer equals a billionth of a meter). A wavenumber, k, can be associated with the wavelength by the relation $k = \frac{2 \pi}{\lambda}$. The period, T, of a wave is the time taken for a wave oscillation to go through one complete cycle (one crest and one trough). The frequency f (also denoted as ν) is the number of periods per unit time. Frequency is usually measured in hertz (Hz), which corresponds to the number of cycles per second. The frequency and period of a wave are reciprocals of each other. Thus their mathematical relationship is: $f=\frac{1}{T}$. One complete cycle of a wave can be said to have an "angular displacement" of 2π radians—in other words, one cycle is completed and another is about to begin. Thus there is another parameter called angular frequency (or angular speed), ω. It is measured as the number of radians per unit time (radians per second) at a fixed position. Angular frequency is related to the frequency by the equation: $\omega = 2 \pi f = \frac{2 \pi}{T}$ The amplitude of a wave (commonly denoted as A or another letter) is a measure of the maximum disturbance in the medium during one wave cycle. In the illustration to the right, this is the maximum vertical distance between the baseline and the wave. The units for measuring amplitude depend on the type of wave. Waves on a string have an amplitude expressed in terms of distance (meters); sound waves, as pressure (in pascals); and electromagnetic waves, as the amplitude of the electric field (in volts/meter). 
The amplitude may be constant, in which case the wave is called a continuous wave (c.w.), or it may vary with time or position. The form of variation of amplitude is called the envelope of the wave. There are two types of velocity associated with a wave: phase velocity and group velocity. Phase velocity gives the rate at which the wave propagates. It is calculated by the equation: $v_p = \frac{\omega}{k} = \lambda f$ Group velocity gives the rate at which information can be transmitted by the wave. In scientific terms, it is the velocity at which variations in the wave's amplitude propagate through space. Group velocity is given by the equation: $v_g = \frac{\partial \omega}{\partial k}$ ### Relationship between frequency and wavelength Frequency has an inverse relationship to the concept of wavelength: simply, frequency is inversely proportional to wavelength λ (lambda). The frequency f is equal to the speed v of the wave divided by the wavelength λ of the wave: $f = \frac{v}{\lambda}.$ In the special case of electromagnetic waves moving through a vacuum, then v = c0 , where c0 is the speed of light in a vacuum, and this expression becomes: $f = \frac{c_0}{\lambda}.$ When waves from a monochromatic source travel from one medium to another, their frequency remains exactly the same—only their wavelength and speed change. ## Measurement ### By timing To calculate the frequency of an event, the number of occurrences of the event within a fixed time interval are counted, and then divided by the length of the time interval. In experimental work (for example, calculating the frequency of an oscillating pendulum) it is more accurate to measure the time taken for a fixed number of occurrences, rather than the number of occurrences within a fixed time. The latter method introduces—if N is the number of counted occurrences—a random error between zero and one count, so on average half a count, causing an biased underestimation of f by ½ f / (N + ½) in its expected value. In the first method, which is more accurate, frequency is still calculated by dividing the number of occurrences by the time interval; however it is the number of occurrences that is fixed, not the time interval. An alternative method to calculate frequency is to measure the time between two consecutive occurrences of the event (the period T) and then compute the frequency f as the reciprocal of this time: $f = \frac{1}{T}.$ A more accurate measurement can be obtained by taking many cycles into account and averaging the periods between each. ### By stroboscope effect, or frequency beats In case when the frequency is so high that counting is difficult or impossible with the available means, another method is used, based on a source (such as a laser, a tuning fork, or a waveform generator) of a known reference frequency f0, that must be tunable or very close to the measured frequency f. Both the observed frequency and the reference frequency are simultaneously produced, and frequency beats are observed at a much lower frequency Δf, which can be measured by counting. This is sometimes referred to as a stroboscope effect. The unknown frequency is then found from $f=f_0\pm \Delta f$. ## Examples • In music and acoustics, the frequency of the standard pitch A above middle C on a piano is usually defined as 440 Hz, that is, 440 cycles per second and known as concert pitch, to which an orchestra tunes. • A baby can hear tones with oscillations up to approximately 20,000 Hz, but these frequencies become more difficult to hear as people age. 
• In Europe, Africa, Australia, Southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and Northern South America, the frequency of the alternating current is 60 Hz (between the tones B♭ and B—that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show where the recording was made—in countries using the European or the American grid frequency. • Visible light from deep red to violet has frequencies of 430 to 750 THz. ## Period versus frequency As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. Some commonly used frequencies and their corresponding periods are listed below:

| Frequency | 1 mHz (10^-3 Hz) | 1 Hz (10^0 Hz) | 1 kHz (10^3 Hz) | 1 MHz (10^6 Hz) | 1 GHz (10^9 Hz) | 1 THz (10^12 Hz) |
|---|---|---|---|---|---|---|
| Period (time) | 1 ks (10^3 s) | 1 s (10^0 s) | 1 ms (10^-3 s) | 1 µs (10^-6 s) | 1 ns (10^-9 s) | 1 ps (10^-12 s) |

## Other types of frequency • Angular frequency ω is defined as the rate of change in the orientation angle (during rotation), or in the phase of a sinusoidal waveform (e.g. in oscillations and waves): $\omega=2\pi f\,$. Angular frequency is measured in radians per second (rad/s). • Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes. • Wavenumber is the spatial analogue of angular frequency. In the case of more than one spatial dimension, wavenumber is a vector quantity. ## Notes 1. Interestingly, 1 hertz is the approximate frequency of the human heartbeat (and Herz is the German word for "heart").
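The conversions described in the sections above (period from frequency, beats per minute to hertz, and frequency from wavelength for a wave of known speed) are easy to check numerically. A minimal Python sketch; the function names are illustrative and not from any standard library:

```python
# Frequency conversions following the formulas above: T = 1/f, BPM/60 = Hz, f = v/lambda.

SPEED_OF_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def period_from_frequency(f_hz):
    """Period in seconds for a frequency given in hertz (T = 1/f)."""
    return 1.0 / f_hz

def hz_from_bpm(bpm):
    """Beats (or revolutions) per minute to hertz: divide by 60."""
    return bpm / 60.0

def frequency_from_wavelength(wavelength_m, speed=SPEED_OF_LIGHT):
    """f = v / lambda for a wave travelling at the given speed."""
    return speed / wavelength_m

print(hz_from_bpm(60))                           # 1.0 Hz, as stated above
print(period_from_frequency(440.0))              # about 0.00227 s for concert pitch A
print(frequency_from_wavelength(700e-9) / 1e12)  # about 428 THz, deep red light
```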
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364779591560364, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/7396/what-all-is-needed-to-solve-for-the-metric-in-gr
# What all is needed to solve for the metric in GR? Einstein's field equations are: $R_{ab} - {1 \over 2}g_{ab}\,R + g_{ab} \Lambda = {8 \pi G \over c^4} T_{ab}$ The Ricci curvature tensor carries "less information" than the Riemann curvature tensor, because $R_{ab} = R^c{}_{acb}$, and the Ricci curvature scalar carries even "less information" than the Ricci curvature tensor, because $R = R^a{}_{a}$. This appears to indicate the GR field equations don't contain enough information to specify how the full Riemann curvature tensor evolves. So it looks like something is missing. Is it even possible to say how the full Riemann curvature tensor evolves in GR? Since the Riemann curvature tensor can be obtained from the metric, if we can solve for how the metric evolves using the Ricci curvature tensor, we can then get the Riemann curvature tensor evolution. So this is equivalent to the title question: what all do we need to solve for the metric in GR? In books where they obtain the solution outside a static spherical object, they seem to always refer back to the Newtonian limit and compare to Newtonian gravity to fully fix the answer. This seems a bit scary, as it seems to suggest indeed that GR needs some other equations specified to get the answer. If I choose a coordinate system and specify the metric everywhere at some initial time, is that initial metric + the GR field equations enough to solve for the metric everywhere in spacetime? Or is there some way to use GR to get the metric without any prior geometry put in? - 1 Your question appears to be riddled with misunderstandings about basic notions in GR such as gauge invariance and the counting of degrees of freedom, and is likely to confuse a newcomer. So -1. – user346 Mar 22 '11 at 5:08 I probably am misunderstanding basic things. Can you suggest how to improve the question? (Or edit it yourself?) – John Mar 22 '11 at 5:22 2 @John: first, talk about the Riemann tensor is completely off topic. You should know that you can determine it from the metric, so it's not independent information. Also, look at Einstein's equations: how many of them there are and how many degrees of freedom do you have in the metric. Do the numbers match? – Marek Mar 22 '11 at 8:41 1 I have Carroll and the large Gravitation book. They both start out nice and slow, and build upon step after step. But eventually I get to a point where I stop and look around and don't understand what the physics content is anymore. It is not clear to me what it even means to take a measurement in such broad notions, so likewise it is not clear to me what is even measurable anymore. The "exploring black holes" book by Taylor is much easier and just static spacetime, but stops to debate any bizarre details along the way. I wish there was a simpler bridge from that to full GR. Any suggestions? – John Mar 22 '11 at 10:51 1 For those reasons, I have sympathy for the people historically that thought gravitational waves may just be a coordinate effect and have no physical meaning. For me, it's just plain hard to grab onto what is "physical" in GR. So I can easily see how they got confused too. – John Mar 22 '11 at 10:56 ## 1 Answer Dear John, let me post the same thing that Marek has said as a standard answer. Einstein's equations are not equations for the Riemann tensor because the Riemann tensor's components are not independent fields. Instead, Einstein's equations are differential equations for the metric tensor. 
In 4 dimensions, the metric tensor has 10 components - a symmetric tensor - and Einstein's equations have 10 components - a symmetric tensor - too. It doesn't matter that the Riemann tensor has 20 components because these 20 functions of space and time are calculated from the 10 component functions of the metric tensor and its (first and second) derivatives. In fact, the 10-10 counting is oversimplified. Four "differential combinations" of Einstein's equations vanish identically because $\nabla_\mu R^{\mu\nu}=0$ is an identity (that always holds, even if the equations of motion are not satisfied). The same identity holds for the corresponding other tensors that are added to the Ricci tensor in Einstein's equations. So instead of 10 equations, Einstein's equations are, in some sense, just 6 independent equations. That means that they don't determine the metric completely: they leave 4 functions undetermined and these are exactly the 4 functions that you may choose arbitrarily to specify a diffeomorphism, mapping one solution into another (equivalent) solution. Up to the coordinate transformations which are always allowed to be made, initial conditions for the metric and its first derivative determine the metric tensor - and therefore the whole Riemann tensor - everywhere in the future. One doesn't need any Newtonian equations as a "mandatory supplement" in general relativity. That doesn't mean that the Newtonian limit is unimportant: of course, it is one of the most important approximate consequences of general relativity. - I'm not sure I'm entirely understanding. Let me try to summarize to see if I am getting it. You are saying that the field equations allow one to find $R^{\mu\nu}$ everywhere, and this can be used to obtain a unique physical solution of $g^{\mu\nu}$ (which is actually a class of physically equivalent solutions related by 4 functions to specify a diffeomorphism). Is that correct? So this means the Riemann curvature tensor doesn't contain more physical information, just Ricci + specific choice out of diffeomorphism related solutions? – John Mar 22 '11 at 9:29 Dear John, sorry but I think you're still not getting the basic point. Einstein's equations are differential equations for the metric tensor, not algebraic equations for the curvature tensor. If you don't understand this point, you shouldn't be using shortcuts such as $R_{ab}$ at all. Instead, you should substitute the right definition of the Ricci tensor into the equations. The equations say something like $\nabla^a \nabla_a g_{cd} + {\rm other\,\,terms} = K T_{cd}$: second-order (second derivatives) partial differential equations for the metric tensor $g_{ab}$. – Luboš Motl Mar 22 '11 at 12:16 Otherwise, the Riemann curvature tensor at one point contains 20 independent components. But if you ask about the Riemann curvature tensor in the whole spacetime, it's 20 functions of spacetime coordinates that are not independent; they satisfy lots of identities proving that they're not independent and these identities reflect the fact that the 20 functions $R_{abcd}(x,y,z,t)$ may be written in terms of 10 functions $g_{ab}(x,y,z,t)$. The Riemann tensor is not an independent variable, and it doesn't appear as such in Einstein's equations at all. – Luboš Motl Mar 22 '11 at 12:18 2 @Lubos -- Am I confused, or do you have a typo in your answer? The thing that's covariantly conserved is the Einstein tensor, not the Ricci tensor. That is, I think you mean $\nabla_\mu G^{\mu\nu}=0$, instead of $\nabla_\mu R^{\mu\nu}=0$. 
– Ted Bunn Mar 22 '11 at 13:26 1 @John, there is a further complication here in that the Einstein equations (and such hyperbolic differential equations in general) do not have a unique solution ($g^{uv}$) in general. In the $T^{uv}=\Lambda=0$ case we have Minkowski $\eta$, Schwarzschild, Kerr, etc. solutions - all physically different metrics. – Roy Simpson Mar 22 '11 at 14:31
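The bookkeeping in the answer above can also be displayed schematically. This is only a restatement of the counting already given there (10 metric components, 4 identities, 4 arbitrary coordinate functions), not an independent derivation: $$\underbrace{10}_{\text{components of }g_{\mu\nu}}\;-\;\underbrace{4}_{\text{identities }\nabla_\mu G^{\mu\nu}=0}\;=\;\underbrace{6}_{\text{independent equations}},\qquad \underbrace{10}_{\text{components of }g_{\mu\nu}}\;-\;\underbrace{6}_{\text{independent equations}}\;=\;\underbrace{4}_{\text{functions left free (diffeomorphisms)}}.$$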
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338179230690002, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/238995/are-all-projection-maps-in-a-categorical-product-epic?answertab=votes
# Are all projection maps in a categorical product epic? Are all $\Pi A_\alpha \stackrel{\pi_i}\longrightarrow A_\alpha$ projection maps epic, given that $\Pi A_\alpha$ be the product of $A_\alpha$s? Of course, assuming that the product exists. - ## 4 Answers In an arbitrary category, it suffices to have a zero object $0$. Consider $\pi_i\colon \prod A_j \rightarrow A_i$ and pick an index $n$. Then we have the identity morphism $\mathrm{id}_n\colon A_n\rightarrow A_n$ and to every other object $A_j$ we have the zero morphism $0_{nj}$. By the universal property of the product, we get a morphism $\rho_n\colon A_n\rightarrow \prod A_j$ with $\pi_n\circ \rho_n = \mathrm{id}_n$ and $\pi_j \circ \rho_n = 0_{nj}$ for all other $j$. Now suppose there exists an object $X$ and $f,g\colon A_n\rightarrow X$ with $f\circ\pi_n = g\circ\pi_n$. Then $$f\circ\pi_n\circ\rho_n = f\circ\mathrm{id}_n = f$$ and $$g\circ\pi_n\circ\rho_n = g\circ\mathrm{id}_n = g$$ Since $f\circ\pi_n = g\circ\pi_n$, we get $f=g$. The Wikipedia page on the product states that it is not true for arbitrary categories (without zero, therefore), but I'm not quick to find an example. - 2 You argument works even if the category does not have a zero object (for example in the category $\textbf{Set}$). It suffices that there exists at least one morphism between any two objects that make up the product. Then one can still construct a morphism $\rho_n$ which is a right inverse for $\pi_n$. And that implies that $\pi_n$ is an epimorphism. – PatrickR Jan 26 at 7:22 Judging by the tag I assume you are referring to the categorical product. In that case here is a small counter example showing that the categorical projections for a categorical product, when it exists, need not be epimorphic. Consider a category with objects $x,y,z$ and the following morphisms (other than the identities). There is precisely one morhpism $z\to x$ and one morhpism $h:z\to y$. It is immediate to verify that in that category $z$ is the product of $x$ and $y$. The projections here are epimorphic but that can easily be changed. Add now a fourth object $t$ together with two morphisms $f_{1,2}:y\to t$ and one morphism $g:z\to t$ with composition of these given by $f_i\circ h=g$. It is easily seen that $z$ is still the product of $x$ and $y$ (because nothing new has $z$ as codomain) but clearly the projection $h:z \to y$ is not an epimorphism. Of course you can tweak things some more to prevent the other projection from being epimorhpic. - There are counterexamples even in Set: for any non-empty set $X$, the projection $\emptyset \times X \to X$ is not epimorphic. Assuming the axiom of choice, in Set, all counter-examples involve the empty set. In a universe of sets where the axiom of choice fails, there are products $\prod_\alpha X_\alpha = \emptyset$ where none of the $X_\alpha$'s are empty sets; these would also give counterexamples. - Ah, of course, this is the most basic example. On the other hand, for sets (or topological spaces, or manifolds), always at least one projection is epic. For schemes this is not the case. – Martin Brandenburg Jan 27 at 14:18 By duality, the question is equivlent to: Are coproduct inclusions monic? The category of commutative rings provides many counterexamples, here $\sqcup = \otimes$ and $R \otimes 0 = 0$, so that $R \to R \otimes 0$ is not injective (unless $R=0$). A little bit more interesting, we have $\mathbb{Z}/2 \otimes \mathbb{Z}/3=0$, so that here both coproduct inclusions are not monic. 
For the algebro-geometric minded reader: There are many non-empty schemes $X,Y$ such that $X \times Y = \emptyset$, so that the projections are not epic ... epic fail! -
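In the category of sets, where the epimorphisms are exactly the surjective maps, the empty-factor counterexample mentioned above is easy to see concretely. A small Python sketch of the cartesian product and its projection; this only illustrates the Set case, nothing categorical beyond that:

```python
from itertools import product

def projection_onto_second(A, B):
    """Image of the projection pi_B : A x B -> B, as a set of second components."""
    return {b for (_, b) in product(A, B)}

X = {1, 2, 3}
Y = {"a", "b"}

# With two non-empty factors, the projection onto Y hits every element of Y.
print(projection_onto_second(X, Y) == Y)      # True: surjective, hence epic in Set

# With an empty factor, the product X x Y is itself empty, so the projection
# onto the non-empty factor Y cannot be surjective.
print(projection_onto_second(set(), Y) == Y)  # False
print(list(product(set(), Y)))                # []  -- the product is empty
```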
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9266533851623535, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=3843428&postcount=17
The relevant inequality for this problem is $|f_n(x)-f(x)| \leq ||f_n-f||_{\infty}$ for all $x \in [0,1]$. But honestly, based on your threads in the homework help forums, you really lack the mathematical maturity to be dealing with convergence in function spaces. It would be worthwhile for you to go back and review basic convergence in $\mathbb{R}$.
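To spell out the point of that inequality: if $||f_n-f||_{\infty}\to 0$ (convergence in the sup norm, i.e. uniform convergence), then $|f_n(x)-f(x)|\to 0$ for every fixed $x$, so uniform convergence implies pointwise convergence; the converse fails. A small numerical illustration with the standard example $f_n(x)=x^n$ on $[0,1]$, which is not from the original post:

```python
import numpy as np

# f_n(x) = x^n converges pointwise on [0,1] to f(x) = 0 for x < 1 and f(1) = 1,
# but the sup-norm distance ||f_n - f||_inf does not go to 0.
x = np.linspace(0.0, 1.0, 10001)
f = np.where(x < 1.0, 0.0, 1.0)  # the pointwise limit

for n in (1, 10, 100, 1000):
    sup_dist = np.max(np.abs(x**n - f))
    print(n, f"value at x=0.5: {0.5**n:.3e}", f"sup distance: {sup_dist:.3f}")
# The values at a fixed point shrink rapidly, while the sup norm stays near 1.
```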
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.939542293548584, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/2694/what-is-the-importance-of-the-collatz-conjecture
# What is the importance of the Collatz conjecture? I have been fascinated by this problem since I first heard about it in high school. From the Wikipedia article http://en.wikipedia.org/wiki/Collatz_problem: Take any natural number $n$. If $n$ is even, divide it by $2$ to get $n / 2$, if $n$ is odd multiply it by $3$ and add $1$ to obtain $3n + 1$. Repeat the process indefinitely. The conjecture is that no matter what number you start with, you will always eventually reach $1$. [...] Paul Erdős said about the Collatz conjecture: "Mathematics is not yet ready for such problems." He offered \$500 USD for its solution. My question is: how important do you consider the answer to this question, and why? And would you speculate on what might have possessed Paul Erdős to make such an offer? EDIT: Is there any reason to think that a proof of the Collatz Conjecture would be complex (like FLT) rather than simple (like PRIMES is in P)? And can this characterization of FLT vs. PRIMES is in P be made more specific than a bit-length comparison? - 5 This should probably be community wiki. – Qiaochu Yuan Aug 18 '10 at 7:51 7 Also, Erdős had a habit of offering cash prizes for solutions to his favorite problems; this was by no means limited to the Collatz problem. – Qiaochu Yuan Aug 18 '10 at 7:55 Still, is the problem as profound as his quote suggests? – Dan Brumleve Aug 18 '10 at 8:00 9 – mmyers Aug 18 '10 at 21:04 1 @Michael, I believe neither list includes Collatz because (Wikipedia to the contrary notwithstanding) Erdos never offered money for its solution. He offered money for solutions to his own problems, not for solutions to problems posed by others. – Gerry Myerson Apr 26 '11 at 13:15 show 1 more comment ## 8 Answers Most of the answers so far have been along the general lines of 'Why hard problems are important', rather than 'Why the Collatz conjecture is important'; I will try to address the latter. I think the basic question being touched on is: In what ways does the prime factorization of $a$ affect the prime factorization of $a+1$? Of course, one can always multiply out the prime factorization, add one, and then factor again, but this throws away the information of the prime factorization of $a$. Note that this question is also meaningful in other UFDs, like $\mathbb{C}[x]$. It seems very hard to come up with answers to this question that don't fall under the heading of 'immediate', such as distinct primes in each factorization. This seems to be in part because a small change in the prime factorization for $a$ (multiplication by a prime, say) can have a huge change in the prime factorization for $a+1$ (totally distinct prime support perhaps). Therefore, it is tempting to regard the act of adding 1 as an essentially-random shuffling of the prime factorization. The most striking thing about the Collatz conjecture is that it seems to be making a deep statement about a subtle relation between the prime factorizations of $a$ and $a+1$. Note that the Collatz iteration consists of three steps; two of which are 'small' in terms of the prime factorization, and the other of which is adding one: • multiplying by 3 has a small effect on the factorization. • adding 1 has a (possibly) huge effect on the factorization. • factoring out a power of 2 has a small effect on the factorization (in that it doesn't change the other prime powers in the factorization). 
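The three steps just listed are easy to experiment with numerically. A minimal Python sketch of the iteration from the question (useful only for exploration; it proves nothing, of course):

```python
def collatz_trajectory(n, max_steps=10_000):
    """Iterate n -> n/2 (n even) or n -> 3n + 1 (n odd) until reaching 1."""
    path = [n]
    while n != 1 and len(path) <= max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        path.append(n)
    return path

print(len(collatz_trajectory(27)) - 1)  # 111 steps to reach 1
print(max(collatz_trajectory(27)))      # climbs as high as 9232 along the way
```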
So, the Collatz conjecture seems to say that there is some sort of abstract quantity like 'energy' which is cannot be arbitrarily increased by adding 1. That is, no matter where you start, and no matter where this weird prime-shuffling action of adding 1 takes you, eventually the act of pulling out 2s takes enough energy out of the system that you reach 1. I think it is for reasons like this that mathematicians suspect that a solution of the Collatz conjecture will open new horizons and develop new and important techniques in number theory. - 14 Dear Greg, Thank you for this very nice answer. – Matt E Nov 17 '10 at 4:14 What delights me most about the Collatz conjecture is your observation about what the iteration does to the factorizations combined with an observation on the sizes of the numbers. Multiplication by 3 and adding 1 more than triples the number, while dividing by 2 only halves it. If you ended up doing a large number of iterations to compute the sequence, and each was equally likely, then you should expect to see exponential growth in the terms of the sequence over the long term. This is why I don't quite believe the conjecture myself, but love that a counterexample is elusive. – Barry Smith Apr 26 '11 at 19:43 11 @Barry Smith: But after a triple-and-add-1 step you're guaranteed to divide by 2 at least once, so you actually expect a ~19% decline rather than a ~34% increase. – Charles Apr 26 '11 at 20:36 Ah yes, true. So now I guess I believe it! Unfortunately, it has now become slightly less interesting to me (but not too much). – Barry Smith Apr 26 '11 at 20:58 10 Just to highlight the effect that the factorization of $a$ and $a+1$ can be radically different, consider the largest known prime number, a Mersenne prime, $a=2^{43,112,609}-1$. $a+1=2^{43,112,609}$. – Jackson Walters Feb 2 '12 at 17:25 show 2 more comments The Collatz conjecture is the simplest open problem in mathematics. You can explain it to all your non-mathematical friends, and even to small children who have just learned to divide by 2. It doesn't require understanding divisibility, just evenness. The lack of connections between this conjecture and existing mathematical theories (as complained of in some other answers) is not an inadequacy of this conjecture, but of our theories. This problem has led directly to theoretical work by Conway showing that very similar questions are formally undecidable, certainly a surprising result. The problem also relates directly to chaotic cellular automata. If you look at a number in base 6, you will see that multiplying by 3 and dividing by 2 are the same operation (differing only by a factor of 6, i.e. the location of the decimal point), and the operation is local: each new digit only depends on two of the previous step's digits. Using a 7th state for cells that are not part of the number, a very simple cellular automaton is obtained where each cell only needs to look at one neighbor to compute its next value. (Wolfram Mathworld has some nonsense about a CA implementation being difficult due to carries, but there are no carries when you add 1, because after multiplying by 3 the last digit is either 0 (becomes a non-digit because number was even so we should divide by 6) or 3 (becomes 4), so there are never any carries.) It is easy to prove that this CA is chaotic: If you change the interior digits in any way, the region of affected digits always grows linearly with time (by $\log_6 3$ digits per step). 
This prevents any engineering of the digit patterns, which are quickly randomized. If the final digit behaves randomly, then the conjecture is true. Clearly any progress on the Collatz conjecture would immediately have consequences for symbolic dynamics. Emil Post's tag systems (which he created in 1920 expressly for studying the foundations of mathematics) have been studied for many decades, and they have been the foundation of the smallest universal Turing machines (as well as other universal systems) since 1961. In 2007, Liesbeth De Mol discovered that the Collatz problem can be encoded as the following tag system: $\begin{eqnarray} \hspace{2cm} \alpha & \longrightarrow & c \, y \\ \hspace{2cm} c & \longrightarrow & \alpha \\ \hspace{2cm} y & \longrightarrow & \alpha \alpha \alpha \\ \end{eqnarray}$ In two passes, this tag system processes the word $\alpha^{n}$ into either $\alpha^{n/2}$ or $\alpha^{(3n+1)/2}$ depending on the parity of $n$. Larger tag systems are known to be universal, and any progress on the 3x+1 problem will be followed with close attention by this field. In short the Collatz problem is simple enough that anyone can understand it, and yet relates not just to number theory (as described in other answers) but to issues of decidability, chaos, and the foundations of mathematics and of computation. That's about as good as it gets for a problem even a small child can understand. - So many mathematicians, and famous ones among them, have tried various ways to attack this problem, and it is still as elusive as it was when first posed. So the importance of the problem is that genuinely new mathematical ideas will have to be created to solve it, and such ideas may be helpful in other domains where "truly important" problems are at stake. Note that Erdős himself has said something along the lines that "we don't have the mathematics yet to solve this problem". - 4 +1: Even though the problem might seem irrelevant, attempts to solve it might spring up new important branches in mathematics. For instance, consider Fermat's Last Theorem. – Aryabhata Aug 18 '10 at 13:39 How much description will such a genuinely new idea require? – Dan Brumleve Sep 1 '10 at 6:44 According to the Wikipedia, Erdös said "Mathematics is not yet ready for such problems." – lhf Nov 16 '10 at 17:44 The Erdős quote referred to in this answer (and in the Nov16 comment) was already correct and highlighted in the original post... – Matt Apr 21 '11 at 6:34 I do not think this is a conceptually important problem. It is an example of a down-to-earth problem that can be checked numerically up to a large value and it has resisted a solution for many years. Not all such problems are automatically important (e.g., nonexistence of odd perfect numbers). An analogy with the significance of Fermat's last theorem is apt. Before the link was made between FLT and deep conjectures of elliptic curves, there was no over-arching significance to knowing whether or not FLT was true. (I think the link between FLT and the abc-conjecture was made at around the same time.) Yes, work on FLT was responsible for useful developments in algebraic number theory, but all the same it was not clear for a long time that settling the problem, or rather finding a counterexample, would have any other repercussions. 
If tomorrow someone showed that the Collatz conjecture were a consequence of the abc-conjecture or some other recognizably important unsolved problem, then I would change my mind about its importance (because, as with the link between the modularity conjecture and FLT, a counterexample to Collatz would then have real implications elsewhere in mathematics). But as long as it stays isolated, having no implications to other problems, I don't think on its own it is a mathematically profound question. The same goes for odd perfect numbers: unless someone shows the existence of an odd perfect number has effects elsewhere that we do not expect (like a counterexample to FLT having a very unexpected implication for elliptic curves), I don't think the mainstream would consider odd perfect numbers to be important either. On the pedagogical side, however, I will definitely grant that this is a nice problem to show students unfamiliar with advanced mathematics that there really are unsolved math problems. People don't necessarily realize this, e.g., they may think that everything can be solved by computers or something. - – Dan Brumleve Aug 24 '10 at 7:40 2 It is far more down-to-earth than the non-existence of odd perfect numbers. Any child who can multiply by 3 and divide by 2 can understand this problem and wonder about it. – Matt Apr 21 '11 at 6:40 I believe the $3x+1$ problem is considered a test case for ergodic theory, i.e., proving that certain probabilistic expectations true are true for orbits of a specific system. There are papers by Sinai, Lagarias and others giving probabilistic models, similar to the asymptotic predictions made in questions about distribution of prime numbers, where the predictions are reliable but proving them in any particular case (twin primes, $n^2+1$, Goldbach, ...) is a centuries-old open problem. This is analogous to transcendence or irrationality proofs of specific numbers: proving that what is "true with probability 1" really holds in a particular instance is extremely hard and pushes the theory forward. Other than that there are no problems outside the $3x+1$ conjecture that use the same iteration, so it is a sink and not a source in the applications-of-theory graph. - I agree. There are plenty of problems where the solution is more important than the result. – AD. Nov 16 '10 at 21:55 I can't add to the question: "what is the importance?", but there is one more aspect which was not yet mentioned. This is the relation of powers of 3 and powers of 2. A subproblem of the question of cycles in the Collatz leads to a critical inequality where the possibility of such cycles depends on the relative distance of perfect powers of 2 to perfect powers of 3. This can also be expressed in terms of approximation of log(3)/log(2) to rational numbers. Kurt Mahler had studied this in terms of his notion of z-numbers (but had only partial success); but it is related also to an unsolved detail in the Waring-problem where the rational approximation of powers of 3/2 to integers is the focus of a conjecture. Hmm, I don't know whether it is meaningful to go more in detail here, I tend to think: no; the idea of the critical inequality was considered by Ray Steiner, and later by John Simons and Benne de Weger; I've linked to an article of the latter in the wikipedia-article on the collatz-problem. A discussion of mine of the approximation-problem is here (but should possibly get enriched with more context). - 1 "nearness (? 
english word)" : proximity – joriki Aug 1 '11 at 22:22 Yes, thanks. Why did I not remember the words "distance" or "difference"... – Gottfried Helms Feb 23 '12 at 8:33 Paul Erdős offered cash prizes for the solution to problems according to his assessment of their difficulty and importance. I believe his judgement of the difficulty of this problem has been shown to be correct by the fact that it remains open. I think this problem is very important in the sense that a large proportion of the people reading this response will have had a go at it, at one time or another. Thus its solution would be of interest to many. Another reason for its importance is that, like Fermat's Last Theorem (Wiles's Theorem), it's easy to state and understand and thus has the potential to attract young people towards mathematics. I too learned about it in high school and could not resist its allure. - I would say that it is a very interesting question, so simple and yet so hard to prove. That is what all important questions should be like, but compared to "Why are we here?", we might actually get an answer. At the same time, one can spend a great deal of time trying to prove it, maybe Erdős spent too much time on it, so he just wanted a proof of it so that he could move on. -
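The small tag system quoted in one of the answers above ($\alpha\to cy$, $c\to\alpha$, $y\to\alpha\alpha\alpha$) can be checked directly. A Python sketch, assuming the usual 2-tag convention (append the production of the first symbol, then delete the first two symbols); the claim that two passes send $\alpha^n$ to $\alpha^{n/2}$ or $\alpha^{(3n+1)/2}$ comes out as stated:

```python
RULES = {"a": "cy", "c": "a", "y": "aaa"}  # "a" stands for alpha

def tag_step(word):
    """One 2-tag step: append the production of the first symbol, delete the first two."""
    return (word + RULES[word[0]])[2:]

def run_until_all_alphas(n):
    """Start from a^n, iterate until the word again consists of a's only; return its length."""
    word = tag_step("a" * n)  # leave the initial all-alpha word first
    while set(word) != {"a"}:
        word = tag_step(word)
    return len(word)

for n in range(2, 11):
    expected = n // 2 if n % 2 == 0 else (3 * n + 1) // 2
    assert run_until_all_alphas(n) == expected
print("a^n is sent to a^(n/2) or a^((3n+1)/2), as claimed")
```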
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619326591491699, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/8227/a-mirror-flips-left-and-right-but-not-up-and-down/8230
# A mirror flips left and right, but not up and down Why is it that when you look in the mirror left and right directions appear flipped, but not the up and down? - 2 On which plane are your (2) eyes oriented? – user4577 Jul 21 '11 at 19:00 It COULD flip you upside down, depending on the type of mirror (flat vs concave vs convex) – MGZero Jul 22 '11 at 3:11 3 And yet a lens turns the image upside down... – Digikata Jul 22 '11 at 4:11 The issue with a normal mirror is an interesting puzzle, but I think the issue with a (round) double-convex lens makes the asymmetry even more obvious and striking. With the flat mirror you can kind of convince yourself that it is a bias of the left-right orientation of your human eyes, but with a round convex lens mirror there's (seemingly) nothing to explain the left-right swap, even though the lens itself is symmetrical... – Mark J Sep 4 '12 at 20:04 ## 14 Answers Here's a video of physicist Richard Feynman discussing this question. Imagine a blue dot and a red dot. They are in front of you, and the blue dot is on the right. Behind them is a mirror, and you can see their image in the mirror. The image of the blue dot is still on the right in the mirror. What's different is that in the mirror, there's also a reflection of you. From that reflection's point of view, the blue dot is on the left. What the mirror really does is flip the order of things in the direction perpendicular to its surface. Going on a line from behind you to in front of you, the order in real space is 3. Dots 4. Mirror The order in the image space is 1. Mirror 2. Dots Although left and right are not reversed, the blue dot, which in reality is lined up with your right eye, is lined up with your left eye in the image. The key is that you are roughly left/right symmetric. The eye the blue dot is lined up with is still your right eye, even in the image. Imagine instead that Two-Face was looking in the mirror. (This is a fictional character whose left and right side of his face look different. His image on Wikipedia looks like this:) If two-face looked in the mirror, he would instantly see that it was not himself looking back! If he had an identical twin and looked right at the identical twin, the "normal" sides of their face would be opposite each other. Two-face's good side is the right. When he looked at his twin, the twin's good side would be to the original two-face's left. Instead, the mirror Two-face's good side is also to the right. Here is an illustration: So two-face would not be confused by the dots. If the blue dot is lined up with Two-Face's good side, it is still lined up with his good side in the mirror. Here it is with the dots: Two-face would recognize that left and right haven't been flipped so much as forward and backward, creating a different version of himself that cannot be rotated around to fit on top the original. - – Lucas Jul 22 '11 at 2:43 Because they don't flip left with right (or up with down), they flip the 3D space you're standing in "inside out", so far from the mirror becomes far away inside the mirror and vice versa. A hand 1 meter from the mirror seems like it's 1 meter on the other side of the mirror but in the same spot with regards to left/right so nothing is flipped. Wiggle your left hand - you'll see the hand which is to the left in the mirror wiggle. Wiggle your toes and the toes in the mirror image wiggle etc. 
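The "front-back, not left-right" point made in the answers here can be made concrete with 3×3 matrices. A small numpy sketch; the axis convention (x = left-right, y = front-back toward the mirror, z = up-down) is chosen here for illustration and is not taken from the answers themselves:

```python
import numpy as np

# A mirror hanging on the wall in front of you (the x-z plane): it reverses
# only the front-back axis y.
mirror = np.diag([1.0, -1.0, 1.0])

# Turning around to face the other way: a 180-degree rotation about the vertical z-axis.
turn_around = np.diag([-1.0, -1.0, 1.0])

# Swapping left and right only.
flip_left_right = np.diag([-1.0, 1.0, 1.0])

print(np.linalg.det(mirror))       # -1.0: an orientation-reversing map, not a rotation
print(np.linalg.det(turn_around))  #  1.0: an ordinary rotation

# The mirror map equals "turn around, then swap left and right", which is why the
# image looks like a person whose left and right have been exchanged.
print(np.allclose(mirror, turn_around @ flip_left_right))  # True
```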
- 14 I was asked this question by a fellow Junior Fellow at Harvard who was convinced that it had to be the most important open question in early 21st century physics. ;-) By the way, there is one helpful twist. If you put a mirror on the ceiling above you, or the floor beneath you, it will reflect up and down, not left and right. Mirrors always reflect/exchange "in front of the mirror" and "behind the mirror". If we stand vertically, we just imagine that the person behind the mirror (the real image) is a real man and we rotate him by 180 degrees around the vertical axis. – Luboš Motl Apr 7 '11 at 20:48 7 Once we rotate the man in the mirror by 180 degrees around the vertical axis. he looks like a normal man ready to shake our hand. But the rotation by 180 degrees was artificially added. If $z$ is up-down and $x$ is left-right and $y$ is front/rear of the mirror, then the mirror which is in the $xz$ plane maps $y\to -y$ and keeps $x,z$. However, we can easily imagine rotations around vertical axes to make the mirror man look like the original man. Such a rotation changes the signs of $x,y$, so the sign flip of $y$ is replaced by a sign flip of $x$ - but the real transform only affects $y$! – Luboš Motl Apr 7 '11 at 20:50 1 I created/associated my account just to say that the first part of your explanation example (mirror on the ceiling) really clarifies well, you should post it as an answer! The second part is a little more unclear, actually. – Kzqai Apr 8 '11 at 13:34 You don't answer the question. – Kris Van Bael Oct 9 '11 at 7:15 This common confusion stems from our familiarity with photographs. We forget that we rotate them to face ourselves. Take a picture of yourself and hold it up in front of you. Probably you are holding it so that you can see your image. If so, you "flipped" the image of yourself when you rotated it 180 degrees around the vertical axis. When you look to the left side of the photo, you are looking over the right shoulder of your image. These directions are flipped! Now look in a mirror. When you look to the left, you are looking over the left shoulder of your image. These directions are not flipped! Now pick up the picture again and turn it so it's facing the same direction you are facing. You have removed the 180 degree rotation so that you and your image are "looking" in the same direction. The left side of your image is again to your left. If the picture is transparent enough that you can see your image, you'll see not the back of your head, but your eyes, giving you the impression that you're looking back at yourself. A mirror image! But again, left and right are not flipped. When you say the mirror "flips" left and right, you are speaking from the frame of reference of one who is used to the 180 degree rotation that you apply to view an opaque photograph. But that's what we all do because we consider photographs, rotated 180 degrees to face ourselves, as being the "correct" left-right orientation. What a mirror really flips is the depth dimension. That which is behind you appears to be in front of you. - Great answer! I just imagine a photon emitting from my right eye, bouncing off the mirror, and then returning to my right eye. Nothing "flipped" there. The eye in the photograph that's directly across (or closest to) my right eye is actually my left eye, because I've flipped the photograph 180 degrees from the original direction of the camera, just as if the photons on the mirror were somehow "frozen", and someone rotated the mirror 180 degrees to face them. 
– webXL Jul 21 '11 at 20:55 The simple example: Write a few words on a transparent piece of plastic of some sort, then hold it up in front of yourself while looking in a mirror. You can read the text just fine in front of you, or in the mirror. – fennec Jul 22 '11 at 1:12 2 Exactly. A mirror does not flip left and right. It is we who imagine another person in the mirror and change our frame of reference by "flipping" it. When I first read about mirrors flipping things (I was probably four or five), I found it confusing because I looked at the mirror and nothing was flipped. (It was only later that I learned to do the flipping with photographs and other people -- before that, I'd use to speak of the right hand of someone facing me as the left hand, because it was indeed to my left.) – ShreevatsaR Dec 27 '11 at 6:56 Take a picture and look at it. Now turn the picture to face the mirror. Question one: who flipped the picture? Answer: you did. Now, face the picture back to you, and walk to the nearest refrigerator. Turn the picture to face the refrigerator. Wow! Refrigerators flip images too! Don't believe me? Take your flipped page and hold it up to a bright light. The image is flipped; no mirror required. Now, most people will turn a page around the vertial axis when they want to face it away from themselves. However, you could flip the page around any axis you choose, as long as it's in the plane of the page. You could easily, for example, flip the page around the horizontal axis. If you still believe that mirrors flip images, you'll notice that you've now tricked the mirror into flipping the image top-to-bottom, not left-to-right. Flip the page around a diagonal axis, and you'll get a very different result. Bottom line: mirrors don't flip images; people do. - First, lets separate the concepts; there is nothing that is "flipped" in the mirror image regarding one orientation more than others. the full group of transformations $O(3)$ includes transformations where $det(R) = -1$. You can consider the following transformations examples of this: 1) they have one random direction flipped in sign, or 2) for the special case where $D= 2n+1$, you can flip all direction signs moreover, all these rotations are equivalent to each other, that is, any of them is equivalent to any other by a normal rotation $det(R) = 1$ explained in other way, the mirror image is also equivalent to your image with up and down flipped. You only think in the equivalent with the right-left flipped version because its easier to build in your mind. That might be related with the fact that our bodies are almost right-left symmetrical, but very poorly up-down symmetrical - It's easier with images... The mirror doesn't flip left and right as you can see in the upper image. The so-called flip occurs when somebody in the real world rotates 180 degrees about the vertical axis to see you face to face, as can be seen in the lower image. Regards Hans - Imagine a mirror placed on the floor. Your face, looking at it, will be pointed downwards; but the face "in the mirror" is "looking" up. The other answers here are great — I just wanted to make sure we discard the assumption that the mirror is placed on a wall! - a lake is much better, natural example, in this case. – arivero Aug 3 '11 at 16:33 Think about where a point above, below, left, and right of your point of view are in the reflection. Your head is still on top, your feet still on the bottom in the mirror. 
Likewise, your left hand is still to the left and your right hand to the right. It seems flipped because, to look behind you, you are used to turning around (which swaps left/right), rather than flipping (which swaps top/bottom). If we flipped upside down instead of turning around to see behind us, you would experience the opposite effect. - Here's how I explain it (this is pretty similar to most of the other answers). Assume the mirror is hanging vertically on a wall, and you're standing upright and facing it, looking at your own reflection. (Just to make the assumptions explicit.) And let's assume that you're facing north, and wearing a watch on your left wrist. The mirror doesn't flip left and right; it flips forward and back. Your front is farther north than your back, but your reflection's front is farther south than its back. Your feet and the reflection's feet both face down. The wrist with the watch is on the west side, both for you and your reflection. So why do we think that the mirror flips left and right? I think it's because we mentally map the (mathematically simple but physically awkward) transformation to something that makes physical sense. The "simple" front-to-back reversal is mentally mapped to (a) a 180-degree rotation (something that people do all the time), and (b) a left-to-right reversal (which people don't do, but since we're pretty much bilaterally symmetric, it doesn't seem as strange). So the person you see in the mirror looks like a normal person who happens to wear his watch on his "right" wrist and part his hair on the other side. If we weren't bilaterally symmetric, we might see it differently. - Because your eyes are set left and right and how you relate to the mirror. When you turn a picture to reflect in a mirror, you usually turn it left or right. If you turned it vertically, the up and down reflection will be opposite, but left and right will still be the same. - 1 It also works with one eye. (although I agree with the second part of your answer) – Kris Van Bael Oct 9 '11 at 7:10 Left and right is relative to you only but not to the rest of your environment (including the mirror), but up and down is static for the whole environment including you and the mirror. - All you have to do is imagine the line y=x+1 on the cartesian plane. If you reflect the line across a mirror in the vertical direction (across the y-axis), then the line becomes y=-x+1. Therefore, a vertical mirror flips the line's horizontal coordinates. However, if you reflect the lines across a mirror is in the horizontal direction (across the x-axis), then the line becomes y=-x-1. A horizontal mirror flips the line's vertical coordinates. The mirror just flips the axis perpendicular to its orientation! - I can't resist: Here are two real experiments I came up with to explore the assertion that mirrors invert left-to-right but not up-to-down. Fans of the scientific method (and of Douglas Adams) know that the first step in any good experiment is to figure out exactly what the question (or hypothesis) really is. So, if you like having "obvious" assumptions shaken up a bit, please read on! 1. On an index card, write your name or some text in large block letters. Next, use one hand to hold it just in front of you as if it were a name badge at a meeting. With your other hand, hold a mirror in front of you so that you can read your "name badge." The text will look inverted left-to-right, as expected, while up-and-down looks normal, also as expected. 
Next, move the mirror upwards until it is straight above you, while tilting the bottom of the card out just enough so you can read it in the mirror. Result: You will still see the card text reversed left-to-right, as before. But think for a second: You are also seeing yourself standing on your head. That's top-to-bottom inversion, so you are getting both types of inversion at once! So, is the lack of vertical inversion really inherent in the mirror question, or have you just been holding the mirror in the wrong place all these years? And if so, why does the choice of where to place the mirror give one inversion in one case, and two in the other?

2. In the second experiment, hold the same card at arm's length in front of you. Have it facing towards you this time so you can easily read it. Next, use your other arm to position a small mirror to the left or right of your line of sight of the card, and adjust the mirror until you can see the card in it. You will see the card text reversed left-to-right, just as with a face-on mirror. Now start moving the mirror in a circular arc until it is above the line of sight instead of to the left or right, keeping the reflected image of the card visible as you do so. When the mirror reaches the top, what do you see? Probably not what you expected: The text now has normal left-to-right order, but each letter is flipped upside down! In other words, the missing "vertical inversion" that everyone worries about in mirror experiments is right there in front of you. If you don't believe me, try it; there's no ambiguity in the effect. And if you want something more to ponder, try to figure out exactly when and how the vertical-is-normal, left-to-right-is-flipped image of the text transforms into the vertical-is-flipped, left-to-right-is-normal image during the rotation of the mirror. You can watch every step of the process as it occurs!

So, with these two experiments, it's worth asking: Is it really true that mirrors always reverse left-to-right? The earlier answer that references Feynman's video discusses this same point nicely, so be sure to look at that if you have not already. I'll likely comment more myself at a later time. But for now, I just wanted to provide a couple of easy, fast experiments you can try yourself, ones that may make you ponder just what the question really is. (The answer of course is 42 -- that much we already know!)

- The mirror is not inverting the image in either direction but rather is acting like a rubber stamp doing a 1:1 transfer from surface to surface. If you wrote a word on a rubber stamp with fresh ink, then pressed the stamp to paper, the word on the paper would be backwards but not upside down. It's just a 1:1 transfer. -
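Several answers above make the same coordinate-level point: a flat mirror negates only the axis perpendicular to its plane, and the familiar "left-right swap" only appears once you mentally compose that reflection with turning around. Here is a small numerical sketch of that statement (my own addition, not part of the original thread; the variable names are mine):

```python
import numpy as np

# Coordinates: x = left-right, y = toward the wall mirror (front-back), z = up-down.

# A wall mirror in the x-z plane negates only the perpendicular axis y.
mirror = np.diag([1.0, -1.0, 1.0])

# "Turning around" to face someone behind you is a 180 degree rotation about
# the vertical z axis; it negates x and y.
turn_around = np.diag([-1.0, -1.0, 1.0])

# Composing the turn with the mirror leaves exactly a left-right flip: this is
# the flip we attribute to the mirror after mentally turning the image around.
print(turn_around @ mirror)       # diag(-1, 1, 1): pure left-right reversal

# A reflection has determinant -1, so no rotation (determinant +1) can ever
# reproduce it on its own.
print(np.linalg.det(mirror))      # -1.0

# A mirror lying on the floor negates up-down instead of front-back.
floor_mirror = np.diag([1.0, 1.0, -1.0])
print(floor_mirror)
```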
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547072052955627, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/51915/can-one-raise-indices-on-covariant-derivative-and-products-thereof
# Can one raise indices on covariant derivative and products thereof? Can the following be true? 1. $g^{\sigma\rho}\nabla_{\rho}\nabla_{\mu} = \nabla^{\sigma}\nabla_{\mu}$ 2. $g^{\sigma\rho}\nabla_{\nu}\nabla_{\sigma} = \nabla_{\nu}\nabla^{\rho}$ 3. $g^{\sigma\rho}\nabla_{\nu}\nabla_{\mu}T_{\sigma\rho} = \nabla_{\nu}\nabla_{\mu}T$ - 2 yes, it is the inherent property (definition) of covariant derivative construct. – Grisha Kirilin Jan 22 at 21:28 ## 1 Answer 1. This is true - in fact you could define $\nabla^\sigma = g^{\sigma\rho} \nabla_\rho$. 2. I assume this meant to say $$g^{\sigma\rho} \nabla_\nu \nabla_\sigma = \nabla_\nu \nabla^\rho.$$ Again, this is true, but for a slightly less trivial reason than (1). To employ (1) to prove this, you need to be able to switch $g^{\sigma\rho}$ with $\nabla_\nu$, which you are able to do because one of the axioms we start with when defining the covariant derivative is that it commutes with the metric (i.e., the metric has vanishing covariant derivative, so that other term in the product rule drops out). 3. This also holds, following the same reasoning as in (2). - 1 To add to what Chris said, raising and lowering indices with the metric can be done on the indices of any tensor. Since the covariant derivative of a tensor is a new tensor with an additional index, raising its indices is a special case of this fact. – joshphysics Jan 22 at 21:57
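As a worked one-line version of the point made in the answer and the follow-up comment, here is the computation behind item (2) spelled out (my own summary, not part of the original exchange), for any tensor field $T$ with its other indices suppressed: using the Leibniz rule and metric compatibility $\nabla_{\nu}g^{\sigma\rho}=0$,
$$g^{\sigma\rho}\nabla_{\nu}\nabla_{\sigma}T \;=\; \nabla_{\nu}\!\left(g^{\sigma\rho}\nabla_{\sigma}T\right)-\left(\nabla_{\nu}g^{\sigma\rho}\right)\nabla_{\sigma}T \;=\; \nabla_{\nu}\!\left(g^{\sigma\rho}\nabla_{\sigma}T\right) \;=\; \nabla_{\nu}\nabla^{\rho}T,$$
so the metric slides freely through covariant derivatives, which is exactly what lets one raise and lower indices on either side of $\nabla$.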
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331648945808411, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/35537/metalogic-prove-neg-p-1-to-p-2-vdash-neg-p-2-to-p-1-without-using-t
Metalogic: Prove $\{\neg P_1 \to P_2\} \vdash (\neg P_2\to P_1)$ without using the Deduction Theorem I want to prove that $$\{\neg P_1\to P_2\} \vdash (\neg P_2\to P_1)$$ without using the Deduction Theorem. I'm not sure how to proceed. The class notes are all we have to work from, no text to work on similar proofs. - 1 What axioms and deduction rules are you working from? – Qiaochu Yuan Apr 28 '11 at 0:47 only modus ponems – Daniel Lopez Apr 28 '11 at 1:26 and my 3 axiom schemas which are 1. B->(A->B) 2. (~A->B)->((~A->~B)->A) 3. (A->(B->C))->((A->B)->(A->C)) – Daniel Lopez Apr 28 '11 at 1:27 Please try to put the question in the body of your message; would you require a reader of a book to look up important information on the spine? – Arturo Magidin Apr 28 '11 at 2:14 3 I was asked a similar question once, and I responded by going through the proof of the deduction theorem and writing down the steps it gives for this particular case. This is a good exercise in understanding how the deduction theorem works, and technically you have not cited the deduction theorem to do it. – Qiaochu Yuan Apr 28 '11 at 3:27 2 Answers I will use the axioms you wrote on the comment, namely: 1. $B\to(A\to B)$ 2. $(\lnot A\to B)\to((\lnot A\to\lnot B)\to A)$ 3. $(A\to(B\to C))\to((A\to B)\to(A\to C))$ We want to prove $\{\lnot P_1\to P_2\}\vdash\lnot P_2\to P_1$. A proof is a finite sequence of sentences that each of them is either an assumption, or one of the logical axioms or it's a sentence that can be deduced through modus ponens using previous sentences of the proof. I will write a number next to every sentence to denote the step in which I am. The first thing to do is to write down your assumptions: 1.$\lnot P_1\to P_2$ (assumption) What we want to do now is to find an axiom that looks like the assumption so we can use the modus ponens. Axiom 2 fits perfectly here so our second step can be 2.$(\lnot P_1\to P_2)\to((\lnot P_1\to\lnot P_2)\to P_1)$ (axiom 2) Using modus ponens on 1 and 2 we can derive the following 3.$(\lnot P_1\to\lnot P_2)\to P_1$ (modus ponens 1,2) The "right hand side" of the consequence we want to prove is there (namely $P_1$) but the "left hand side" is different. So what we want is some means to replace that $\lnot P_1\to\lnot P_2$ with $\lnot P_2$. Intuitively, what we need is to show that from $\lnot P_2$ we can derive $\lnot P_1\to\lnot P_2$. Looking at the axioms this is exactly what the first one gives: 4.$\lnot P_2\to(\lnot P_1\to\lnot P_2)$ (axiom 1) We have something of the form $A\to B$ and $B\to C$ and we want to prove $A\to C$. If we can do that then our prove will be complete. So now what we want to prove is $\{A\to B, B\to C\}\vdash A\to C$. Again let's begin with our assumptions: 1.$A\to B$ (assumption) 2.$B\to C$ (assumption) What we want to do is to create somewhere $A\to C$. Looking at the axioms that you have (and since $A\to B$ is an assumption) a good idea is to use the third axiom 3.$(A\to (B\to C))\to((A\to B)\to(A\to C))$ (axiom 3) Since $B\to C$ is an assumption we should use the first axiom to create that $(A\to (B\to C))$ we have 4.$(B\to C)\to(A\to (B\to C))$ (axiom 1) Now we have everything we want. We just need to apply modus ponens to derive our result: 5.$A\to(B\to C)$ (modus ponens 2,4) 6.$(A\to B)\to(A\to C)$ (modus ponens 3,5) 7.$A\to C$ (modus ponens 1,6) So writing it down a bit more formally we have: A. $\{A\to B, B\to C\}\vdash A\to C$: 1. $A\to B$ (assumption) 2. $B\to C$ (assumption) 3. $(A\to (B\to C))\to((A\to B)\to(A\to C))$ (axiom 3) 4. 
$(B\to C)\to(A\to (B\to C))$ (axiom 1)
5. $A\to(B\to C)$ (modus ponens 2,4)
6. $(A\to B)\to(A\to C)$ (modus ponens 3,5)
7. $A\to C$ (modus ponens 1,6)

B. $\{\lnot P_1\to P_2\}\vdash\lnot P_2\to P_1$:
1. $\lnot P_1\to P_2$ (assumption)
2. $(\lnot P_1\to P_2)\to((\lnot P_1\to\lnot P_2)\to P_1)$ (axiom 2)
3. $(\lnot P_1\to\lnot P_2)\to P_1$ (modus ponens 1,2)
4. $\lnot P_2\to(\lnot P_1\to\lnot P_2)$ (axiom 1)
5. $\lnot P_2\to P_1$ (using A with 3,4)
-
This is small enough for a truth table (note that this checks the semantic entailment; the derivation above is the syntactic proof the question asks for).
1. First, write out the unmodified table for reference:
P1 P2 P1->P2
0  0    1
0  1    1
1  0    0
1  1    1
2. Next, apply the negation to the antecedent, giving the premise ~P1->P2:
P1 P2 ~P1->P2
0  0     0
0  1     1
1  0     1
1  1     1
3. Then, build the table for the conclusion ~P2->P1:
P1 P2 ~P2->P1
0  0     0
0  1     1
1  0     1
1  1     1
Both of the modified truth tables match row for row, so the two formulas are logically equivalent; in particular every assignment satisfying $\lnot P_1\to P_2$ also satisfies $\lnot P_2\to P_1$. -
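For completeness, the semantic claim in the second answer can also be machine-checked by brute force over all truth assignments. This short script is an illustrative addition of mine, not part of the original thread:

```python
from itertools import product

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

# The premise ~P1 -> P2 and the conclusion ~P2 -> P1 take the same truth value
# on every assignment, so the entailment holds.
for p1, p2 in product([False, True], repeat=2):
    premise = implies(not p1, p2)
    conclusion = implies(not p2, p1)
    assert premise == conclusion
    print(p1, p2, premise, conclusion)
```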
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9043378829956055, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/122090-generate-random-matrix-given-condition-number.html
# Thread:

1. ## Generate a random matrix given a condition number

If I am given the condition number, how can I generate a random matrix with that condition number? The matrix can have any number of rows and columns. Please help...

2. The "condition number" of a matrix, $A$, is defined as $\|A\|\,\|A^{-1}\|$. For symmetric matrices, this is equal to $\frac{\lambda_{max}}{\lambda_{min}}$ where $\lambda_{max}$ is the largest eigenvalue and $\lambda_{min}$ is the smallest. So the simplest thing to do is to take a diagonal matrix with appropriate numbers on the diagonal. For example, if the condition number is to be 3/2, a matrix with that condition number is $\begin{bmatrix}3 & 0 \\ 0 & 2\end{bmatrix}$. If you wanted 3 by 3 or 4 by 4 matrices just put numbers between those on the diagonal. To get a non-diagonal matrix (so it looks like you've done more work!) take $A^{-1}DA$ of your diagonal matrix D by some invertible matrix A.

3. Thanks for the reply .. but the condition number of D does not seem to be equal to the condition number of $A^{-1}DA$. I took the invertible matrix A = [1 1; -1 2] and computed $A^{-1}DA$ with D = [3 0; 0 2], but I get the condition number as 1.5328 instead of 1.500. I can't make out why??
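The discrepancy in the last post is expected: a similarity transform $A^{-1}DA$ preserves eigenvalues, but the 2-norm condition number is the ratio of the largest to the smallest singular value, and a non-orthogonal $A$ changes the singular values. Conjugating by orthogonal matrices avoids this. Below is a sketch of one standard way to generate a random square matrix with a prescribed 2-norm condition number; the function name and the choice of singular-value spacing are my own, not from the thread (for rectangular matrices one would use orthogonal factors of the appropriate shapes):

```python
import numpy as np

def random_matrix_with_condition_number(n, kappa, seed=None):
    """Random n x n matrix whose 2-norm condition number is kappa.

    Sketch: fix singular values between kappa and 1, then form U @ diag(s) @ V.T
    with random orthogonal U and V; orthogonal factors do not change singular
    values, so the condition number is preserved exactly.
    """
    rng = np.random.default_rng(seed)
    s = np.linspace(kappa, 1.0, n)                      # prescribed singular values
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal matrix
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))    # another one
    return U @ np.diag(s) @ V.T

A = random_matrix_with_condition_number(4, 1.5)
print(np.linalg.cond(A))   # ~1.5 (np.linalg.cond uses the 2-norm by default)
```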
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9200536012649536, "perplexity_flag": "head"}
http://gilkalai.wordpress.com/2009/05/21/extremal-combinatorics-vi-the-frankl-wilson-theorem/?like=1&source=post_flair&_wpnonce=a082172354
Gil Kalai’s blog ## Extremal Combinatorics VI: The Frankl-Wilson Theorem Posted on May 21, 2009 by Rick Wilson The Frankl-Wilson theorem is a remarkable theorem with many amazing applications. It has several proofs, all based on linear algebra methods (also referred to as dimension arguments). The original proof is based on a careful study of incidence matrices for families of sets. Another proof by Alon, Babai and Suzuki applies the “polynomial method”. I will describe here one variant of the theorem that we will use in connection with the Borsuk Conjecture (It follows a paper by A. Nilli).  I will post separately about more general versions, the origin of the result, and various applications. One application about the maximum volume of spherical sets not containing a pair of orthogonal vectors is presented in the next post. Let $n=4p$ and suppose that $p$ is a prime. Theorem: Let $\cal F$ be a subset of $\{-1,1\}^n$ with the property that no two vectors in $\cal F$ are orthogonal. Then $|{\cal F}| \le 4({{n} \choose {0}}+{{n}\choose {1}}+\dots+{{n}\choose{p-1}})$. We will prove a slightly modified version which easily implies the Theorem as we stated it before: Let $\cal G$ be the set of vectors ($x_1,x_2,\dots,x_n$) in $\{-1,1\}^n$ such that $x_1=1$ and the number of ’1′ coordinates is even.  Let $\cal F$ be a subset of $\cal G$ with the property that no two vectors in $\cal F$ are orthogonal. Then $|{\cal F}| \le {{n} \choose {0}}+{{n}\choose {1}}+\dots+{{n}\choose{p-1}}$. ## The ad-hoc trick: Claim 1: Let $x,y\in {\cal G}$. If $<x,y>=0(mod~p)$ then $<x,y>=0$. This is a crucial part of the argument and as far as I can see the part that you need to replace entirely in other applications of the method. We need Claim 1′:  For every $x,y\in {\cal G}$, $<x,y>=0(mod~4)$. Given Claim 1′, if $<x,y>=0(mod~p)$ then $<x,y>=0(mod~4p)$. Since the first coordinate is 1 it is not possible that $y=-x$ and it follows that $<x,y>=0$. Proof (of Claim1′):  Suppose that the number of coordinates $i$ where $x_i=1$ and $y_i=1$ is $a$, the number of coordineates where $x_i=1$ and $y_i=-1$ is $b$, the number of coordineates where $x_i=-1$ and $y_i=1$ is $c$, and the number of coordineates where $x_i=-1$ and $y_1=-1$ is $d$. The inner product $<x,y>=a-b-c+d$. Now $a+b$ is even and also $a+c$ is even so $b+c$ is also even and $2b+2c$ is divisible by 4. $a+b+c+d=n$ is divisible by 4 and therefore so is $a-b-c+d=(a+b+c+d)-2(b+c)$. ## The algebra-puzzle trick Compute $(x-a)\times (x-b)\times(x-c)\times \dots \times (x-z)$. The answer is ZERO because one of the terms is (x-x). Magic:  underline the empty line with your mouse to see the solution. Define a family of polynomials in $n$ variables $x_1,x_2,\dots, x_n$, over the field with $p$ elements $F_p$ as follows. For every $y=(y_1,y_2,\dots,y_n)\in F$ define $P_y(x_1,x_2,\dots,x_n)=\prod_{k=1}^{p-1}(<x,y>-k)$. Here $x=(x_1,x_2\dots,x_n)$. Claim 2: For $x,y\in{\cal F}$, $x\ne y$ we have $P_y(x)=0$. Proof: Since $<x,y>\ne 0$ we must have that $<x,y>=k(mod~p)$ for some $k$, $1 \le k \le p-1$. Therefore one of the terms in the product defining $P_y(x)$ must vanish. (Remember, we work over the field with $p$ elements. Ahla! Claim 3: $P_y(y)\ne 0$. Proof: The inner product of a vector with itself $<y,y>=0$ in $F_p$.  This is important! Therefore $P_y(y)=(-1)^{p-1}(p-1)!\ne 0$. (It is a product of non-zero numbers modulo $p$. What is $(p-1)!$ modulo p? This is a result of a different Wilson.) Walla! 
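Before moving on, here is a quick numerical sanity check (my own addition, not part of the original post) of the two arithmetic facts used so far: Claim 1′, that inner products of vectors in $\cal G$ are divisible by 4, and the Wilson's-theorem fact behind Claim 3, that $(p-1)!$ is nonzero modulo $p$:

```python
import itertools
import math
import random

p = 3            # any prime; the post takes n = 4p
n = 4 * p

def sample_from_G():
    """Random vector in {-1,1}^n with x_1 = 1 and an even number of +1 coordinates."""
    while True:
        x = [1] + [random.choice([-1, 1]) for _ in range(n - 1)]
        if sum(1 for c in x if c == 1) % 2 == 0:
            return x

vectors = [sample_from_G() for _ in range(200)]
for x, y in itertools.combinations(vectors, 2):
    assert sum(a * b for a, b in zip(x, y)) % 4 == 0   # Claim 1': <x,y> = 0 (mod 4)

# Wilson's theorem: (p-1)! = -1 (mod p), so P_y(y) is a nonzero element of F_p.
assert math.factorial(p - 1) % p == p - 1
print("checks passed")
```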
Remark: Some people prefer to simply write $P_y(x)=<x,y>^{p-1} -1$ which gives a different less extravagant way to present the same thing. (And without the algebra-puzzle.) I still prefer the way we defined the polynomials because for other applications of the method, it generalizes more easily. ## The linear algebra (diagonal) trick Claim 4:  The polynomials $P_y(x):y\in {\cal F}$ are linearly independent. Proof:  Suppose that $\sum \{\lambda_yP_y(x):y \in {\cal F}\}=0$. Substitute in this sum $x=z$ for $z \in {\cal F}$. By Claim 2 we obtain that the outcome is $\lambda_zP_z(z)$. By claim 3 $P_z(z)\ne 0$ and therefore $\lambda_z=0$.  Sababa! Remark: It is worthwhile to remember a variation of this trick, the linear algebra upper-diagonal trick, where you conclude linear independence when you assume that there is some order on  your set of $y$‘s and that $P_y(z)=0$ for $y<z$. ## The multilinearization trick We now replace the polynomials $P_y(x)$ by new polynomials $\bar P_y(x)$ by applying the following rule: Replace each $x_i^{2k}$ by 1 and each $x_i^{2k+1}$ by $x_i$! Claim 5: The new polynomials are square-free, and they attain the same values for vectors with coordinates $\pm 1$. Proof: This is because when we have a variable $x$ that attains only the values +1 and -1 the value of $x^{2k}$ is 1, and the value of $x^{2k+1}$ is the same as that of $x$. Walla! Claim 6:  The polynomials $\bar P_y(x):y\in {\cal F}$ are linearly independent. Proof: The proof of claim 4 applies. Walla! ## The end of the proof: By claim 6, the size of ${\cal F}$ is at most the dimension of the vector space of square-free polynomials of degree $p-1$ in $n$ variables $x_1,x_2,\dots, x_n$.  This dimension is the number of sqauare-free monomials of degree $p-1$ in $n$ variables $x_1,\dots, x_n$ which is ${{n} \choose {0}}+{{n}\choose {1}}+\dots+{{n}\choose{p-1}}$. Sababa! ### Like this: This entry was posted in Combinatorics and tagged Peter Frankl, Richard Wilson. Bookmark the permalink. ### 6 Responses to Extremal Combinatorics VI: The Frankl-Wilson Theorem 1. Gil Kalai says: The reason the second version implies the first is as follows: If we flipp the value of the first coordinate we do not change inner products so the second version applies also to subsets of ${\cal G}'$ which is the set of all vectors with $x_1=-1$ having odd numbers of ’1′s. The proof only used the fact that the number of 1 coordinates is even and there are no antipodal vectors $x=-y$. Therefore, we can simply cover all $\pm 1$ vectors by 4 sets to which the second version applies. 2. Pingback: How Large can a Spherical Set Without Two Orthogonal Vectors Be? « Combinatorics and more 3. Anonymous says: Many thanks for the nice exposition! Here are two questions motivated by it. 1) How sharp the Frankl-Wilson estimate is? 2) What is the estimate for $n$ *not* of the form $4p$ with $p$ prime? 4. Gil Kalai says: Thanks! 1) If you look at the set of all the vectors with at most $p-1$ ‘-1′s or at most $p-1$ +1′s you get an example with no two orthogonal vectors, with about half the number in the theorem. 2) If n is even but not of the form we considered you can apply Frankl-Rodl’s theorem to get bounds of the form $(2-\epsilon)^n$ with worth value of $\epsilon$. (The Frankl Wilson theorem gives the correct value of $\epsilon$ in the four times prime case.) 5. Pingback: Borsuk’s Conjecture « Combinatorics and more 6. Pingback: Four Derandomization Problems « Combinatorics and more • ### Blogroll %d bloggers like this:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 108, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8839746713638306, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/131113/showing-f-1y-is-poincare-dual-to-f-operatornamevol/131196
# Showing $[f^{-1}(y)]$ is Poincare dual to $f^*(\operatorname{vol})$.

Let $f: N^n \to M^m$ be a smooth map between closed oriented manifolds. Then I'm trying to show that for almost all $y \in M$, the homology class $[f^{-1}(y)] \in H_{n-m}(N)$ is Poincare dual to $f^* \operatorname{vol}_M$, where $\operatorname{vol}_M$ is the volume form on $M$ (here I'm using the fact that for generic $y$, $f^{-1}(y)$ is a submanifold of dimension $n-m$). Unwinding the definitions, I need to show that if $\phi \in \Omega^{n-m}(N)$ is closed then $$\int_{f^{-1}(y)} i^* \phi = \int_N \phi \wedge f^* \operatorname{vol}_M$$ where $i: f^{-1}(y) \to N$ is the inclusion. Now this is easy to see if, for example, $N = M \times F$ and the map $f$ is just the projection, but I am having trouble proving this in general. My issue is that the left hand side appears to depend on $y$ while the right hand side doesn't. But I believe all such $f^{-1}(y)$ are homologous, so the left hand side is really independent of $y$; I just don't know how to show it's equal to the right hand side. I'm sure this is a really standard thing in differential topology, but for some reason I haven't found it in my literature search. -

## 1 Answer

Look at Griffiths-Harris, "Principles of Algebraic Geometry", middle of page 59, for the statement and proof of the following general and important result.

General result: With the same assumptions, one can show that if $f$ is non-singular over a cycle $C \subset M$, then, with the proper orientation, the cycle $f^{-1}(C) \subset N$ is Poincare dual to the pull-back via $f$ of the Poincare dual of $C$.

Now, all you have to show is that the Poincare dual of a (generic) point $y \in M$ is the volume form. The rest follows from Poincare duality and the pairing of closed differential forms of complementary degree in de Rham cohomology. -
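For the product case mentioned in the question, the computation can be written out directly (this is my own sketch, not part of the original page; it assumes $\operatorname{vol}_M$ is normalized so that $\int_M \operatorname{vol}_M = 1$, which the stated identity needs anyway, and suppresses the overall sign fixed by orientation and fiber-integration conventions). With $N = M \times F$, $f = \pi_M$ the projection, and $\phi$ a closed $(n-m)$-form,
$$\int_N \phi\wedge \pi_M^{*}\operatorname{vol}_M \;=\; \int_{y\in M}\left(\int_{\{y\}\times F}\phi\right)\operatorname{vol}_M \;=\; \int_{\{y\}\times F} i^{*}\phi,$$
where the first equality is integration over the fiber (Fubini), and the second uses that the fiber integral is independent of $y$: the fibers $\{y\}\times F$ are all homologous in $N$ and $\phi$ is closed, so the function $y \mapsto \int_{\{y\}\times F}\phi$ is constant and comes out of the integral against total mass 1.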
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510226249694824, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/48993-tangent-circle-question.html
# Thread:

1. ## tangent circle question

The radius of the incircle of a triangle is 4 cm and the segments into which one side is divided by the point of contact are 6 cm and 8 cm; determine the other two sides of the triangle. Oh and yeah, R = 4 cm and the other R is the name of the contact point.. no relation.. PLEASE I need help, yeah another tangent question.. Well, the only way I know to do this question is by using Heron's formula and then comparing the area found that way with the sum of the areas (each 1/2*base*height) of the three triangles that can be formed by joining the centre to the vertices. Is there an easier and less shabby way? Attached Thumbnails

2. Originally Posted by ice_syncer (the question above) Apply Pythagoras' theorem to triangle ORB. You know OR and BR. So you'll find OB. Then apply Pythagoras' theorem to triangle OPB. You know OP and OB. So you'll find BP. And so on... I don't know if it's quicker than applying Heron's formula.

3. $\frac\beta{2}=\arctan\frac{4}{8}$ ($\triangle ORB$), $\frac\gamma{2}=\arctan\frac{4}{6}$ ($\triangle ORC$) $\Downarrow$ $\alpha=180°-\beta-\gamma$, $\frac{8+6}{\sin\alpha} = \frac{b}{\sin\beta}$ (Law of sines; same for $c$)
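For reference, the Heron's-formula route sketched in the first post can be carried to the end in a few lines (my own worked computation, not part of the thread). Using the fact that the two tangent segments from a vertex to the incircle are equal, write $x$ for the tangent length from the vertex opposite the given side; then the sides are $14$, $x+6$ and $x+8$, and the semiperimeter is $s = x+14$. Comparing the incircle area formula with Heron's formula,
$$\text{Area} = rs = 4(x+14), \qquad \text{Area}^2 = s(s-14)\bigl(s-(x+6)\bigr)\bigl(s-(x+8)\bigr) = (x+14)\cdot x \cdot 8 \cdot 6,$$
so $16(x+14)^2 = 48x(x+14)$, giving $x+14 = 3x$ and $x = 7$. The other two sides are therefore $7+6 = 13$ cm and $7+8 = 15$ cm, consistent with the arctangent approach in the last post.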
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391010999679565, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/67029/extension-theory-with-bump-function/67369
## Extension theory with bump function ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $B_t(0)$ denote the $n$ dimensional ball of radius $t$ centered at the origin. Does there exist a $\phi\in C(\mathbb{R}^n)$ function with the properties: $\phi (x) = \begin{cases} 1&x\in B_r(0) \\ 0&x\not\in B_{r+3}(0) \end{cases}$ and for any real-valued function $f\in \mathcal{H}^\tau(\mathbb{R^n})$ ($\tau\in\mathbb{R}$, $\tau>d/2$) we have $\left\|\phi(\cdot) f(\cdot)\right\|_{\mathcal{H}^\tau(A_1)}\leq C\left\|f\right\|_{\mathcal{H}^\tau(A_2)}$ where $C$ is a constant independent of $f$, $A_1 = B_{r+2}(0)\setminus B_{r+1}(0)$ and $A_2 = B_{r+3}(0)\setminus B_{r}(0)$. This has a similar feel to extension theory results if we think of $\phi$ as an extension operator which preserves $f$ on $B_r(0)$. - what is $\mathcal{H}^\tau$? – Anton Petrunin Jun 6 2011 at 15:25 This is the Sobolev space $H^\tau=W^{\tau,2}$. en.wikipedia.org/wiki/Sobolev_space – alext87 Jun 8 2011 at 13:24 I think I am missing something obvious, but why can you not pick any $\phi \in C^2$ and set $C = C(\vert\vert \phi\vert\vert_{C^2(B_{r+2}(0))})$, i.e. pull out the derivatives of $\phi$ in $L^{\infty}$? – Yakov Shlapentokh-Rothman Jun 8 2011 at 15:53 In the above comment, the 2 should be replaced by $s$ where $s > \tau$. – Yakov Shlapentokh-Rothman Jun 8 2011 at 15:55 Sure you can do this for integer $\tau$ just by using Leibniz rule. In fact the details are quite simple. I was more interested in the fractional sobolev space case where the details are escaping me. – alext87 Jun 8 2011 at 16:27 show 3 more comments ## 2 Answers Short answer: yes. Let $\psi_\epsilon(x):=\frac{1}{\epsilon^n}\exp{\epsilon^2/(\epsilon^2-|x|^2)}$ for $|x|<\epsilon$, and $\psi_{\epsilon}(x)=0$ for $|x|\geq \epsilon$. Set $\epsilon=2$, and define $\phi$ is the convolution of $C\phi_{\epsilon}$ with the characteristic function of $B_{r+3/2}(0)$, that is, $\phi(x):= C\psi_\epsilon(x)* \chi_{B_{r+3/2}(0)}$. Here $C$ is a normalizing constant (this may not be needed, but I haven't checked). This yields a smooth cut-off function which is 1 in the ball $B_{r+1}(0)$, and zero outside $B_{r+2}(x)$. To see this does the trick, one can use a localization theorem, for example, Theorem 3.20 in 'Strongly Elliptic Systems and Boundary Integral Equations' by W. McLean. This theorem states: 'Suppose that $\phi \in C^r_{comp}(\mathbb{R}^n)$ for some integer $r\geq 1$, and let $|s|\leq r$. If $u\in H^s(\Omega)$ then $\phi u \in H^s(\Omega)$, and $||\phi u||_{H^s(\Omega)} \leq C_r||\phi||_{W^{r,\infty}(\mathbb{R}^n)}Q_u$ where $Q_u=||u||_{H^s(\Omega). }$ (Apologies, I encountered trouble while trying to typeset the LaTeX here). The same result holds with $H^s(\Omega)$ replaced with $\tilde{H}^s(\Omega)$.' The proof proceeds using $\Omega = A_2$, and then either by (a) considering the situation for $s=r$, using duality to see it holds for $s=-r$, and the intermediate $s$ by interpolation. This is suggested by Yakov above. or (b) by examining $\hat{\phi u}$ and using Peetre's inequality. Since the constructed $\phi \in C^\infty$ and has compact support, it will satisfy the inequality you seek. In my comment I asked whether you wanted a $\phi$ of minimal regularity (relative to $\tau$); my construction works but may be overkill. - Is approach (b) supposed to work for $\Omega \neq \mathbb{R}^n$? 
It is not clear to me how the Fourier transform will be useful unless $\Omega = \mathbb{R}^n$, since that is the only case in which the Sobolev spaces are defined directly in terms of the Fourier transform. – Yakov Shlapentokh-Rothman Jun 15 2011 at 13:50 Yakov, the answer is 'yes'. One defines $H^s(\Omega)$ for any open set as the distributions which are the restrictions to $\Omega$ of some $U\in H^s(\mathbb{R}^n)$, with the norm being the induced norm. This provides the continuous inclusion of $H^s(\Omega) \subset W^{s,2}(\Omega)$ for $s\geq 0$. [To obtain set equality, one needs $\Omega$ to permit an extension operator. Set equality holds for any non-empty open $\Omega$, and $s$ the negative integers.] One then uses Peetre's inequality on the object $U$. McLean's book has a good discussion. – Nilima Nigam Jun 15 2011 at 16:21 Thanks, that makes sense. – Yakov Shlapentokh-Rothman Jun 15 2011 at 17:52

I believe that the fractional Sobolev spaces can be defined as a complex interpolation space between the integer Sobolev spaces (see this http://www.scribd.com/doc/45316527/Sobolev-Spaces-2ed-Robert-a-Adams-John-J-F-Fournier for example). As noted in the comments, your question is easily seen to hold on the integer-valued spaces. Then we can interpolate to get the result for the fractional spaces. - This approach works. Thanks. The answer was just a little vague in comparison to Nilima Nigam's response. – alext87 Jun 15 2011 at 7:34
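A concrete alternative to the convolution construction in the accepted answer is the standard $\exp(-1/t)$ gluing, which gives a $C^\infty$ radial cutoff equal to 1 on $B_{r+1}(0)$ and 0 outside $B_{r+2}(0)$, comfortably inside the required $B_r$/$B_{r+3}$ margins. The numerical sketch below is my own addition (the helper names are not from the thread):

```python
import numpy as np

def f(t):
    """Smooth on R: equal to exp(-1/t) for t > 0 and identically 0 for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.exp(-1.0 / t[pos])
    return out

def smooth_step(t):
    """C-infinity increasing function equal to 0 for t <= 0 and 1 for t >= 1."""
    return f(t) / (f(t) + f(1.0 - t))

def phi(points, r):
    """Radial cutoff: 1 for |x| <= r+1, 0 for |x| >= r+2, smooth in between."""
    points = np.atleast_2d(points)
    return smooth_step((r + 2.0) - np.linalg.norm(points, axis=1))

r = 1.0
print(phi([[0.5, 0.0], [1.9, 0.0], [3.5, 0.0]], r))   # approximately [1, 1, 0]
```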
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9002459049224854, "perplexity_flag": "head"}
http://jaswindercbse.nuvvo.com/lesson/11786-probability
# LearnHub ## probability Author jaswinder_singh_saini Probability, or chance, is a way of expressing knowledge or belief that an event will occur or has occurred. In mathematics the concept has been given an exact meaning in probability theory, that is used extensively in such areas of study as mathematics, statistics, finance, gambling, science, and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems. ## Interpretations The word probability does not have a consistent direct definition. In fact, there are two broad categories of probability interpretations, whose adherents possess different (and sometimes conflicting) views about the fundamental nature of probability: 1. Frequentists talk about probabilities only when dealing with experiments that are random and well-defined. The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. Frequentists consider probability to be the relative frequency "in the long run" of outcomes. 2. Bayesians, however, assign probabilities to any statement whatsoever, even when no random process is involved. Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence. ## Etymology The word probability derives from probity, a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which, in contrast, is used as a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference. ## History The scientific study of probability is a modern development. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions of use in those problems only arose much later. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." Aside from some elementary considerations made by Girolamo Cardano in the 16th century, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability for a history of the early development of the very concept of mathematical probability. The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given. Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. 
He represented the law of probability of errors by a curve y = φ(x), x being any error and y its probability, and laid down three properties of this curve: 1. it is symmetric as to the y-axis; 2. the x-axis is an asymptote, the probability of the error $\infty$ being 0; 3. the area enclosed is 1, it being certain that an error exists. He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations. Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors. The method of least squares is due to Adrien-Marie Legendre (1805), who introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error, $\phi(x) = ce^{-h^2 x^2},$ h being a constant depending on precision of observation, and c a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known. In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. On the geometric side (see integral geometry) contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin). ## Mathematical treatment In mathematics, a probability of an event A is represented by a real number in the range from 0 to 1 and written as P(A), p(A) or Pr(A). An impossible event has a probability of 0, and a certain event has a probability of 1. However, the converses are not always true: probability 0 events are not always impossible, nor probability 1 events certain. The rather subtle distinction between "certain" and "probability 1" is treated at greater length in the article on "almost surely". The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring); its probability is given by P(not A) = 1 - P(A). As an example, the chance of not rolling a six on a six-sided die is 1 - (chance of rolling a six) = ${1} - \tfrac{1}{6} = \tfrac{5}{6}$ . See Complementary event for a more complete treatment. If both the events A and B occur on a single performance of an experiment this is called the intersection or joint probability of A and B, denoted as $P(A \cap B)$ . If two events, A and B are independent then the joint probability is $P(A \mbox{ and }B) = P(A \cap B) = P(A) P(B),\,$ for example, if two coins are flipped the chance of both being heads is $\tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4}$ . 
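The two worked numbers just above (5/6 for "not rolling a six" and 1/4 for "two heads") are easy to confirm by enumerating the sample spaces; the short script below is an illustrative addition, not part of the original lesson:

```python
from itertools import product
from fractions import Fraction

# Complement rule: P(not rolling a six) with one fair die.
die = range(1, 7)
p_not_six = Fraction(sum(1 for d in die if d != 6), 6)
print(p_not_six)        # 5/6

# Independence: P(both heads) when two fair coins are flipped.
coins = list(product(["H", "T"], repeat=2))
p_two_heads = Fraction(sum(1 for c in coins if c == ("H", "H")), len(coins))
print(p_two_heads)      # 1/4
```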
If either event A or event B or both events occur on a single performance of an experiment this is called the union of the events A and B, denoted as $P(A \cup B)$. If two events are mutually exclusive then the probability of either occurring is $P(A\mbox{ or }B) = P(A \cup B)= P(A) + P(B).$ For example, the chance of rolling a 1 or 2 on a six-sided die is $P(1\mbox{ or }2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}$. If the events are not mutually exclusive then $\mathrm{P}\left(A \hbox{ or } B\right)=\mathrm{P}\left(A\right)+\mathrm{P}\left(B\right)-\mathrm{P}\left(A \mbox{ and } B\right)$. For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J,Q,K) (or one that is both) is $\tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}$, because, of the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A|B), and is read "the probability of A, given B". It is defined by $P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$ If P(B) = 0 then $P(A \mid B)$ is undefined.

Summary of probabilities

| Event | Probability |
| --- | --- |
| A | $P(A)\in[0,1]$ |
| not A | $P(A')=1-P(A)$ |
| A or B | $P(A\cup B) = P(A)+P(B)-P(A\cap B)$, which reduces to $P(A)+P(B)$ if A and B are mutually exclusive |
| A and B | $P(A\cap B) = P(A\mid B)P(B)$, which reduces to $P(A)P(B)$ if A and B are independent |
| A given B | $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$ |

## Theory

Like other theories, the theory of probability is a representation of probabilistic concepts in formal terms, that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are then interpreted or translated back into the problem domain. There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see probability space), sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details. There are other methods for quantifying uncertainty, such as the Dempster-Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as they are usually understood.

## Applications

Two major applications of probability theory in everyday life are in risk assessment and in trade on commodity markets. Governments typically apply probabilistic methods in environmental regulation where it is called "pathway analysis", often measuring well-being using methods that are stochastic in nature, and choosing projects to undertake based on statistical analyses of their probable effect on the population as a whole. A good example is the effect of the perceived probability of any widespread Middle East conflict on oil prices - which have ripple effects in the economy as a whole.
An assessment by a commodity trader that a war is more likely vs. less likely sends prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are not assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict. It can reasonably be said that the discovery of rigorous methods to assess and combine probability assessments has had a profound effect on modern society. Accordingly, it may be of some importance to most citizens to understand how odds and probability assessments are made, and how they contribute to reputations and to decisions, especially in a democracy.

Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, utilize reliability theory in the design of the product in order to reduce the probability of failure. The probability of failure may be closely associated with the product's warranty.

## Relation to randomness

In a deterministic universe, based on Newtonian concepts, there is no probability if all conditions are known. In the case of a roulette wheel, if the force of the hand and the period of that force are known, then the number on which the ball will stop would be a certainty. Of course, this also assumes knowledge of inertia and friction of the wheel, weight, smoothness and roundness of the ball, variations in hand speed during the turning and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analysing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant, $6\times 10^{23}$) that only a statistical description of its properties is feasible.

A revolutionary discovery of 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The wave function itself evolves deterministically as long as no observation is made, but, according to the prevailing Copenhagen interpretation, the randomness caused by the wave function collapsing when an observation is made is fundamental. This means that probability theory is required to describe nature. Others never came to terms with the loss of determinism. Albert Einstein famously remarked in a letter to Max Born: Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt. (I am convinced that God does not play dice). Although alternative viewpoints exist, such as that of quantum decoherence being the cause of an apparent random collapse, at present there is a firm consensus among physicists that probability theory is necessary to describe quantum phenomena.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9261865019798279, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Algebraic_number
Algebraic number In mathematics, an algebraic number is a number that is a root of a non-zero polynomial in one variable with rational coefficients (or equivalently—by clearing denominators—with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental; almost all real and complex numbers are transcendental. (Here "almost all" has the sense "all but a countable set"; see Properties below.) Examples • The rational numbers, expressed as the quotient of two integers a and b, b not equal to zero, satisfy the above definition because $x = a/b$ is the root of $bx-a$.[1] • The quadratic surds (irrational roots of a quadratic polynomial $ax^2 + bx + c$ with integer coefficients $a$, $b$, and $c$) are algebraic numbers. If the quadratic polynomial is monic $(a = 1)$ then the roots are quadratic integers. • The constructible numbers are those numbers that can be constructed from a given unit length using straightedge and compass and their opposites. These include all quadratic surds, all rational numbers, and all numbers that can be formed from these using the basic arithmetic operations and the extraction of square roots. (Note that by designating cardinal directions for 1, −1, $i$, and $-i$, complex numbers such as $3+\sqrt{2}i$ are considered constructible.) • Any expression formed using any combination of the basic arithmetic operations and extraction of nth roots gives an algebraic number. • Polynomial roots that cannot be expressed in terms of the basic arithmetic operations and extraction of nth roots (such as the roots of $x^5 - x + 1$). This happens with many, but not all, polynomials of degree 5 or higher. • Gaussian integers: those complex numbers $a+bi$ where both $a$ and $b$ are integers are also quadratic integers. • Trigonometric functions of rational multiples of $\pi$ (except when undefined). For example, each of $\cos(\pi/7)$, $\cos(3\pi/7)$, $\cos(5\pi/7)$ satisfies $8x^3 - 4x^2 - 4x + 1 = 0$. This polynomial is irreducible over the rationals, and so these three cosines are conjugate algebraic numbers. Likewise, $\tan(3\pi/16)$, $\tan(7\pi/16)$, $\tan(11\pi/16)$, $\tan(15\pi/16)$ all satisfy the irreducible polynomial $x^4 - 4x^3 - 6x^2 + 4x + 1$, and so are conjugate algebraic integers. • Some irrational numbers are algebraic and some are not: • The numbers $\sqrt{2}$ and $\sqrt[3]{3}/2$ are algebraic since they are roots of polynomials $x^2 - 2$ and $8x^3 - 3$, respectively. • The golden ratio $\phi$ is algebraic since it is a root of the polynomial $x^2 - x - 1$. • The numbers $\pi$ and $e$ are not algebraic numbers (see the Lindemann–Weierstrass theorem);[2] hence they are transcendental. Properties Algebraic numbers on the complex plane colored by degree. (red=1, green=2, blue=3, yellow=4) • The set of algebraic numbers is countable (enumerable).[3] • Hence, the set of algebraic numbers has Lebesgue measure zero (as a subset of the complex numbers), i.e. "almost all" complex numbers are not algebraic. • Given an algebraic number, there is a unique monic polynomial (with rational coefficients) of least degree that has the number as a root. This polynomial is called its minimal polynomial. If its minimal polynomial has degree $n$, then the algebraic number is said to be of degree $n$. An algebraic number of degree 1 is a rational number. A real algebraic number of degree 2 is a quadratic irrational. • All algebraic numbers are computable and therefore definable and arithmetical. 
• The set of real algebraic numbers is linearly ordered, countable, densely ordered, and without first or last element, so is order-isomorphic to the set of rational numbers. The field of algebraic numbers Algebraic numbers colored by degree (blue=4, cyan=3, red=2, green=1). The unit circle in black. The sum, difference, product and quotient of two algebraic numbers is again algebraic (this fact can be demonstrated using the resultant), and the algebraic numbers therefore form a field, sometimes denoted by A (which may also denote the adele ring) or Q. Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. This can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals, and is therefore called the algebraic closure of the rationals. Related fields Numbers defined by radicals All numbers that can be obtained from the integers using a finite number of integer additions, subtractions, multiplications, divisions, and taking nth roots (where n is a positive integer) are algebraic. The converse, however, is not true: there are algebraic numbers that cannot be obtained in this manner. All of these numbers are solutions to polynomials of degree ≥5. This is a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). An example of such a number is the unique real root of the polynomial x5 − x − 1 (which is approximately 1.167304). Closed-form number Main article: Closed-form number Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. Most broadly, all numbers that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms are called "elementary numbers", and these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers explicitly defined in terms of polynomials, exponentials, and logarithms – this does not include algebraic numbers, but does include some simple transcendental numbers such as e or log(2). Algebraic integers Main article: Algebraic integer Algebraic numbers colored by leading coefficient (red signifies 1 for an algebraic integer). An algebraic integer is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are 5 + 13√2, 2 − 6i, and 1⁄2(1 + i√3). (Note, therefore, that the algebraic integers constitute a proper superset of the integers, as the latter are the roots of monic polynomials x − k for all k ∈ Z.) The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name algebraic integer comes from the fact that the only rational numbers that are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If K is a number field, its ring of integers is the subring of algebraic integers in K, and is frequently denoted as OK. These are the prototypical examples of Dedekind domains. Notes 1. Some of the following examples come from Hardy and Wright 1972:159–160 and pp. 178–179 2. 
Hardy and Wright 1972:160 References • Artin, Michael (1991), Algebra, Prentice Hall, ISBN 0-13-004763-5, MR 1129886 • Ireland, Kenneth; Rosen, Michael (1990), A Classical Introduction to Modern Number Theory, Graduate Texts in Mathematics 84 (Second ed.), Berlin, New York: Springer-Verlag, ISBN 0-387-97329-X, MR 1070716 • G.H. Hardy and E.M. Wright 1978, 2000 (with general index) An Introduction to the Theory of Numbers: 5th Edition, Clarendon Press, Oxford UK, ISBN 0-19-853171-0 • Lang, Serge (2002), Algebra, Graduate Texts in Mathematics 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR1878556 • Øystein Ore 1948, 1988, Number Theory and Its History, Dover Publications, Inc. New York, ISBN 0-486-65620-9 (pbk.)
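As a quick sanity check of two of the trigonometric examples listed above, the following Python snippet (not part of the original article) evaluates the stated polynomials at the stated cosine and tangent values. It is only a floating-point check that the values are roots up to rounding error, not a proof of the algebraic claims or of irreducibility.

```python
# Floating-point check of two claims from the Examples section:
#   cos(pi/7), cos(3pi/7), cos(5pi/7) should be roots of 8x^3 - 4x^2 - 4x + 1;
#   tan(3pi/16), tan(7pi/16), tan(11pi/16), tan(15pi/16) should be roots of
#   x^4 - 4x^3 - 6x^2 + 4x + 1.
from math import cos, tan, pi

def cubic(x):
    return 8*x**3 - 4*x**2 - 4*x + 1

def quartic(x):
    return x**4 - 4*x**3 - 6*x**2 + 4*x + 1

for k in (1, 3, 5):
    print(f"cubic(cos({k}pi/7))    = {cubic(cos(k*pi/7)):+.2e}")    # ~ 0
for k in (3, 7, 11, 15):
    print(f"quartic(tan({k}pi/16)) = {quartic(tan(k*pi/16)):+.2e}") # ~ 0
```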
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8884193301200867, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Problem_of_Apollonius&oldid=10372
# Problem of Apollonius

### From Math Images

This is an example of a fractal that can be created by repeatedly solving the Problem of Apollonius. Fields: Geometry and Fractals. Created By: Paul Nylander

# Basic Description

The problem of Apollonius involves trying to find a circle that is tangent to three objects: points, lines, or circles in a plane. The most famous of these is the case involving three different circles in a plane, as seen in the picture to the left. The given three circles are in red, green, and blue, while the solution circle is in black. Apollonius of Perga posed and solved this problem in his work called Tangencies. Sadly, Tangencies has been lost, and only a report of his work by Pappus of Alexandria is left. Since then, other mathematicians, such as Isaac Newton and Descartes, have been able to recreate his results and discover new ways of solving this interesting problem.

The problem usually has eight different solution circles that are tangent to the given three circles in a plane. The given circles must not be tangent to each other, overlapping, or contained within one another for all eight solutions to exist. Given three points, the problem has only one solution. In the cases of one line and two points; two lines and one point; and one circle and two points, the problem has two solutions. Four solutions exist for the cases of three lines; one circle, one line, and one point; and two circles and one point. There are eight solutions for the cases of two circles and one line; and one circle and two lines, in addition to the three-circle problem.

# A More Mathematical Explanation

There are many different ways of solving the problem of Apollonius. The ones that are easiest to understand include an algebraic method and an inversive-geometry method.

### Algebraic Method

This method uses nothing beyond quadratic equations. We will proceed by setting up a system of quadratic equations and solving for the radius, $r$, of the unknown circle. We start by labeling the centers of the given circles $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$, and we will call the center of the unknown circle $(x,y)$. The radii of the given circles are $r_1$, $r_2$, and $r_3$. From this we are able to write our equations:

• $(x - x_1)^2 + (y - y_1)^2 = (r \pm r_1)^2$
• $(x - x_2)^2 + (y - y_2)^2 = (r \pm r_2)^2$
• $(x - x_3)^2 + (y - y_3)^2 = (r \pm r_3)^2$

Next we expand each of the equations to see better how they relate to each other. Expanding gives us:

• $(x^2+y^2-r^2)-2xx_1-2yy_1\pm2rr_1+(x_1^2+y_1^2-r_1^2)=0$
• $(x^2+y^2-r^2)-2xx_2-2yy_2\pm2rr_2+(x_2^2+y_2^2-r_2^2)=0$
• $(x^2+y^2-r^2)-2xx_3-2yy_3\pm2rr_3+(x_3^2+y_3^2-r_3^2)=0$

We can now subtract these equations from one another; we take the second and third equations minus the first. Second minus first gives us:

• $2(x_1-x_2)x+2(y_1-y_2)y+2(\pm r_1 \pm r_2)r=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2)$

Third minus first gives us:

• $2(x_1-x_3)x+2(y_1-y_3)y+2(\pm r_1 \pm r_3)r=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2)$

For the sake of simplicity, we'll define some new variables.
Let $a_2=2(x_1-x_2)$; $b_2=2(y_1-y_2)$; $c_2=2(\pm r_1 \pm r_2)$; $d_2=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2)$
and $a_3=2(x_1-x_3)$; $b_3=2(y_1-y_3)$; $c_3=2(\pm r_1 \pm r_3)$; $d_3=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2)$.

Now our two equations can be written as

$a_2 x+b_2 y +c_2 r=d_2$
$a_3 x+b_3 y +c_3 r=d_3$

Since this is a simple linear system of equations, we can solve it for $x$ and $y$ in terms of $r$.

Solving the first equation for x: $a_2x=d_2-c_2r-b_2y \rightarrow x=\frac{d_2-c_2 r -b_2 y}{a_2}$

Substituting that into the second equation allows us to find y in terms of known values and r:

$a_3\left(\frac{d_2-c_2 r -b_2 y}{a_2} \right)+b_3 y +c_3 r =d_3$
$a_3 d_2-a_3c_2 r -a_3 b_2 y+a_2 b_3 y +a_2 c_3 r =d_3 a_2$
$y(a_2 b_3-a_3 b_2)=a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r$

• $y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}$.

Rather than substituting back in to find x, it is actually simpler to go through the same process we used to find y to get x in terms of known values and r. First, we solve the first equation for y.

$b_2y=d_2-c_2r-a_2x \rightarrow y=\frac{d_2-c_2 r -a_2 x}{b_2}$

Plugging into the second equation gives us

$a_3 x+b_3\left(\frac{d_2-c_2 r -a_2 x}{b_2}\right) +c_3 r =d_3$
$a_3 b_2 x+b_3 d_2 -b_3 c_2 r-a_2 b_3 x +b_2 c_3 r=b_2 d_3$
$x(a_3 b_2-a_2 b_3)=b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r$

• $x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}$.

With $x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}$ and $y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}$, we plug in values for the a's, b's and c's (which we get from the original information about the centers and radii of the three circles) to calculate x and y. Using our very first equations for the circles, we can then solve for r.

Let's pick three circles and find the mutually tangent ones. Let's choose some coordinates and radii for our three circles. Take $(x_1,y_1,r_1)=(0,3,2)$, $(x_2,y_2,r_2)=(-2,-2,1)$ and $(x_3,y_3,r_3)=(3,-3,3)$. These three circles are shown below. Now we want to calculate the a's, b's, and d's first.

$a_2=2(x_1-x_2)=2(0-(-2))=4$
$a_3=2(x_1-x_3)=2(0-3)=-6$
$b_2=2(y_1-y_2)=2(3-(-2))=10$
$b_3=2(y_1-y_3)=2(3-(-3))=12$
$d_2=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2) =(0^2+3^2-2^2)-(2^2+2^2-1^2)=(9-4)-(4+4-1)=-2$
$d_3=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2) =(0^2+3^2-2^2)-(3^2+3^2-3^2)=(9-4)-(9+9-9)=-4$

Calculating the c terms requires a bit more thought since there are the $\pm$ signs. The choice of these signs determines which circle we are solving for. We simply must be consistent in all of our applications of signs for a given r. For the first example, let's simply take all of the plus signs. Then

$c_2=2(r_1+r_2)=2(2+1)=6$
$c_3=2(r_1+r_3)=2(2+3)=10$.

Now we can calculate x and y for this first circle.

$x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}=\frac{(10)(-4)-(12)(-2)+(12(6)-10(10))r}{-6(10)-4(12)}=\frac{-16-28r}{-108}$
$x=\frac{1}{27}(4+7r)$.
$y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}=\frac{(4)(-4)-(-6)(-2)+(-6(6)-4(10))r}{12(4)-(10)(-6)}=\frac{-28-76r}{108}$
$y=\frac{-7-19r}{27}$.

Now we can return to one of our first equations to find r.

$(x-x_1)^2+(y-y_1)^2=(r+r_1)^2$

Note that here the sign of $r_1$ is positive. That is because we took the positive sign when we solved for the c values.
Plugging in values, we get

$\left(\frac{1}{27}(4+7r)\right)^2+\left(\frac{-7-19r}{27}-3 \right)^2=(r+2)^2$

This equation is quadratic in r, so it can be solved using the quadratic formula (or a graphing calculator, if you prefer). When the dust clears, we get $r\approx 4.729$ (the other value that comes out of the quadratic formula does not work when plotted). Now we can plot this circle with center $\left(\frac{1}{27}(4+7(4.729)),\frac{-7-19(4.729)}{27} \right)$. It is shown below in red, with the original circles in black. We can see that it is indeed tangent to the three original circles! This process can be repeated choosing different signs for the different r values in the c coefficients to find the other seven circles.

# Apollonian Gasket

The Apollonian gasket is an example of one of the earliest studied fractals and was first constructed by Gottfried Leibniz. It can be constructed by solving the problem of Apollonius iteratively. It was a precursor to Sierpinski's Triangle, and in a special case, it forms Ford Circles.

Constructing the gasket begins with three mutually tangent circles. By solving this case of the problem of Apollonius we know that there are two other circles that are tangent to the three given circles. We now have five circles from which to start again. Repeat the process with two of the original circles and one of the newly generated circles. Again, by solving Apollonius' problem we can find two circles that are tangent to this new set of three circles. However, we already know one of the two solutions for this set of three circles: it is the remaining circle of the three that we started with. Repeating this process over and over again with each set of three mutually tangent circles will create the Apollonian gasket.

# References

Math Pages, Apollonius' Tangency Problem
MathWorld, Apollonius' Problem
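For readers who want to experiment, here is a small Python/NumPy sketch of the algebraic method described above; the helper name `apollonius_circle` and the sign convention are choices of this sketch, not part of the original page. Run on the worked example (circles centered at (0,3), (−2,−2), (3,−3) with radii 2, 1, 3 and all plus signs), it should reproduce a radius close to the $r\approx 4.729$ quoted in the text.

```python
# Sketch of the algebraic method above (Python 3 + NumPy).
import numpy as np

def apollonius_circle(c1, c2, c3, s1=1, s2=1, s3=1):
    """Return (x, y, r) for one circle tangent to the three given circles.

    Each ci = (xi, yi, ri); the signs si pick which of the eight solutions
    is computed (all +1 reproduces the example in the text).
    """
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = c1, c2, c3
    a2, b2, c2_ = 2*(x1 - x2), 2*(y1 - y2), 2*(s1*r1 + s2*r2)
    a3, b3, c3_ = 2*(x1 - x3), 2*(y1 - y3), 2*(s1*r1 + s3*r3)
    d2 = (x1**2 + y1**2 - r1**2) - (x2**2 + y2**2 - r2**2)
    d3 = (x1**2 + y1**2 - r1**2) - (x3**2 + y3**2 - r3**2)
    # x and y as linear functions of r, exactly as derived above.
    den_x, den_y = a3*b2 - a2*b3, a2*b3 - a3*b2
    px, qx = (b2*d3 - b3*d2) / den_x, (b3*c2_ - b2*c3_) / den_x
    py, qy = (a2*d3 - a3*d2) / den_y, (a3*c2_ - a2*c3_) / den_y
    # Substitute into (x - x1)^2 + (y - y1)^2 = (r + s1*r1)^2 -> quadratic in r.
    u, v = px - x1, py - y1
    A = qx**2 + qy**2 - 1
    B = 2*(u*qx + v*qy - s1*r1)
    C = u**2 + v**2 - (s1*r1)**2
    roots = np.roots([A, B, C])
    r = max(roots.real)            # keep the positive root for this example
    return px + qx*r, py + qy*r, r

print(apollonius_circle((0, 3, 2), (-2, -2, 1), (3, -3, 3)))
# expected: roughly (1.37, -3.59, 4.73), matching r ~ 4.729 from the text
```

Looping over the eight choices of signs (s1, s2, s3) is the natural way to search for all solution circles, which is the step that gets iterated when building the Apollonian gasket.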
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 91, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280362129211426, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3971178
Physics Forums

## Contravariant and covariant vectors transformations

Hi all, I am new to General Relativity and I started with the General Relativity course on Youtube posted by Stanford (Leonard Susskind's lectures on GR). So the first thing to understand is the transformation of covariant and contravariant vectors. Before I can understand a transformation, I would like to know what they look like. I mean, what is the difference between covariant and contravariant vectors? And what is their role in understanding GR? In addition, I would like to know what is the application of Einstein's Field Equations? I mean, can I get one problem where they are actually used? Thanks! Joe W.

Contravariant or "tangent" vectors lie parallel (tangent) to the axis in question. The $e_x$ vector lies parallel to the x-axis, for example. Covariant or "cotangent" vectors lie perpendicular to all the other axes. In 3D, take the plane defined by the y and z axes, and the vector $e^x$ is perpendicular to that plane. In cartesian coordinates, covariant and contravariant vectors aren't different, but in more general, curvilinear coordinates or non-orthogonal frames and curved spaces, the two types of vectors become different and have to be kept track of. One of the main concepts that is introduced in GR is the invariance of physical laws under arbitrary coordinate transformations. This is not actually unique to GR, but it commonly comes into play here because the concept of spacetime allows for even time derivatives to be treated under this theory. The idea is as follows: Often, the position vector is just called $x =x^0 e_0 + x^1 e_1 + x^2 e_2 + x^3 e_3$. We don't say what kind of coordinate system this is (whether it's cartesian, spherical, or something else). But in general, if we want to change coordinates, we introduce a new position vector $x' = f(x)$, where $f$ is some function. For instance, converting between cartesian and polar coordinates, you might do something like $$x' = \sqrt{(x^1)^2 + (x^2)^2} e_1 + \arctan \frac{x^2}{x^1} e_2 = r e_1 + \theta e_2$$ This is a simple example of a 2D transformation. At any rate, an arbitrary transformation has a Jacobian, which relates the partial derivatives of the new coordinates to the old coordinates: the Jacobian $\underline f$ is given by $\underline f(a) = a \cdot \nabla f(x)$, where $a$ is some vector to be transformed. The vector $\underline f(a) = a'$ gives the transformation of a tangent vector. This is usually written in index notation as $$a'^i = a^j \frac{\partial x'^i}{\partial x^j}$$ The prototypical example of a tangent vector is the four-velocity $u$, from which we get the transformation law $u' = \underline f(u)$. On the other hand, there are cotangent space objects, one of which is the derivative $\nabla$. One can prove that $\overline f(\nabla') = \nabla$, where the overbar denotes the transpose, and this in general gives the transformation law for objects in the cotangent space. So you can see that under arbitrary coordinate transformation laws, we have these two different kinds of objects--covariant (cotangent) vectors and contravariant (tangent) vectors. In general, one converts between the two using the metric of the space.
There is an operator $\underline g(a)$ that takes a tangent vector $a$ and converts it into a cotangent vector, and the inverse does the opposite. This gives the flexibility to work with vectors in the most convenient space as appropriate. As for the Einstein equations, usually they're used to get a solution for the metric and from there various results are derived. The different time dilation on the Earth compared to satellites in orbit, for instance, is calculated knowing that the Earth yields a certain solution to the Einstein equations in its immediate neighborhood and how that affects the perception of time at various heights. Blog Entries: 3 Recognitions: Gold Member I assume you know that a vector belongs in a vector space, and that a vector space can be characterised by a set of basis vectors. It is an axiom that every point on curve on a manifold has two vector spaces, one has a basis formed by the tangent vectors, and the other has a basis of 1-forms ( also called covectors). The vector spaces are connected by a metric ( if there is one), so any vector can be written in either basis. Contravariant vectors are written in the tangent space basis and covariant vectors in the 1-form basis. The transformation is linear, vμ = gμα vα. This is a machinery to ensure that the contractions formed from tensors are coordinate independent. Like the norm of a 4-vector ( which is a rank-1 tensor) n = vμvμ = gμα vμvα. I recommend Sean Carroll's book or lecture notes on this subject. [Posted simultaneously with Muphrid] ## Contravariant and covariant vectors transformations Thank you all very much for such comprehensive replies! So as I can conclude from your posts the covariant and contravariant vectors look like this: http://img841.imageshack.us/img841/9...covarcompl.png In this case, we may think of components as vectors themselves. In green is covariant while in blue is contravariant. The Tensor itself is a product of two Vectors, isn't it? So the output of the Cross Product of two vectors is a Tensor? If so, what are its components? Also, why do we need a metric in a spherical coordinates when we can use Great Circle distance-formula to calculate distance between any two points on a sphere? I wouldn't think of components as vectors themselves; it's just that you can express a single vector either in terms of the tangent basis vectors and components or in terms of the cotangent basis and those components. You don't need the metric to convert between the two--you can do it the hard way if you already know both the tangent basis and the cotangent basis--but you get the same answer as if you'd just used the metric. This is part of why the metric is convenient. Any distance formula implicitly uses the metric--without a metric, the concept of distance doesn't exist. Beyond 3D, there isn't a product of vectors that yields another vector. The "cross product" is abandoned, and we deal in wedge products instead. If you have two vectors $a$ and $b$, the wedge product $a \wedge b$ describes the plane that the two vectors span. Typically, this is where one resorts to index notation. For $C = a \wedge b$, the components of $C$ would be $C^{ij} = a^i b^j - a^j b^i$ for $i \neq j$. While you can write a vector in terms of its components and basis vectors, for this object $C$ you would need a concept beyond basis vectors--you'd need something that covers unit planes and such. While these concepts do exist, they're not typically considered in usual formulations of GR. People just tend to deal in components. 
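To make that last point concrete, here is a tiny NumPy illustration (a toy example of my own, not from the thread) of using a metric to convert contravariant components into covariant ones, and of the invariance of the contraction. It uses the flat 2D metric in polar coordinates, $ds^2 = dr^2 + r^2\,d\theta^2$, which also comes up later in this thread.

```python
# Raising/lowering an index with a metric, and checking the invariant contraction.
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])          # metric components g_{mu nu} in polar coords
g_inv = np.linalg.inv(g)          # inverse metric g^{mu nu}

v_up = np.array([0.3, 0.5])       # contravariant components (v^r, v^theta)
v_down = g @ v_up                 # covariant components v_mu = g_{mu nu} v^nu
print(v_down)                     # [0.3, 2.0]
print(g_inv @ v_down)             # back to [0.3, 0.5]
print(v_up @ v_down, v_up @ g @ v_up)   # same invariant number both ways (1.09)
```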
The Tensor itself is a product of two Vectors, isn't it? So the output of the Cross Product of two vectors is a Tensor? If so, what are its components?

The components of the tensor product are the product of the components of the tensors being multiplied.

$$ (a,b) \otimes (c,d) = \left[ \begin{array}{cc} ac & ad \\ bc & bd \end{array} \right] $$

The diagram you linked shows the parallel projected components and the perpendicular projected components and is a way to understand them. The important thing is that the inner product of those vectors is a geometric invariant.

So the output of the Cross Product of two vectors is a Tensor, or not?

Quote by GRstudent: So the output of the Cross Product of two vectors is a Tensor, or not?

Yes! The tensor product of tensors is a tensor. Remember that 4-vectors are rank-1 tensors. Tensors can be made by multiplying tensors or by differentiating tensors or contracting tensors.

Just one confusion that I would like to clear up: Susskind said that curvature is an obstruction to flattening out the coordinates. So in this sense a sphere would have curvature, right? Then why is the Riemann tensor for a sphere zero?

What do you mean when you're talking about a sphere? If you're talking about the 2D surface of a ball, it has curvature, and the Riemann tensor is nonzero. If you're talking about the volume of a ball, that space has no curvature because it's the same as flat space.

Quote by GRstudent: Just one confusion that I would like to clear up: Susskind said that curvature is an obstruction to flattening out the coordinates. So in this sense a sphere would have curvature, right? Then why is the Riemann tensor for a sphere zero?

The Riemann curvature tensor doesn't identically vanish for the 2-sphere.

I was talking about the volume of a ball, a 3-D sphere. I read that the Gammas (Christoffel symbols) were like 1/r etc. And at the end, the components of the Riemann tensor ended up being zero.

Quote by GRstudent: I was talking about the volume of a ball, a 3-D sphere. I read that the Gammas (Christoffel symbols) were like 1/r etc. And at the end, the components of the Riemann tensor ended up being zero.

You've run into a bit of geometric pedantry. To distinguish between the surface of a spherical object and the volume that that object contains, "sphere" and "ball" can sometimes no longer mean the same thing. WannabeNewton just did that in his last post: "2-sphere" means "the 2D surface of a 3D spherical object". The object that contains the 2-sphere and the volume within it is called the 3-ball. In short, "sphere" = boundary of a "ball". At any rate, in flat space but using spherical coordinates, some of the Christoffel symbols are nonzero, but the curvature is still zero. This is just an artifact of the coordinates being used. There's really nothing special about the 3D ball. It may as well be infinite, unbounded space. Curvature here means something more fundamental--that the space and time are stretched or distorted in ways that coordinate system changes don't capture.

Ok, here is the metric in polar coordinates:

$ds^{2}=dx^{2}+dy^{2}$
$x=r\cos\theta$, $y=r\sin\theta$
$dx=\cos\theta\,dr-r\sin\theta\,d\theta$
$dy=\sin\theta\,dr+r\cos\theta\,d\theta$
$ds^{2}=dr^{2}+r^{2}d\theta^{2}$

So this is the metric in polar coordinates. Now if I want to actually use it, I will need to get from the differential $ds$ to an actual length $s$ by integrating the metric, right?
So that would give me an arc length in polar coordinates. So $s=\int ds=\int\sqrt{dr^{2}+r^{2}d\theta^{2}}$. What is this equal to? Also, is it true that the $dr^{2}$ term vanishes if I am calculating the arc length on a circle? Thank you all!

In practice, I believe you generally consider the path as having some parametric form, so $r = r(\tau)$ and $\theta = \theta(\tau)$. Using $\tau$, the proper time, is most common, but you don't need to know it to parameterize the coordinates--indeed, doing this calculation is one way you find proper time.

How do I solve $s=\int ds=\int\sqrt{dr^{2}+r^{2}d\theta^{2}}$? Got it. Never mind.
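For anyone else reading the thread, here is how that integral is usually evaluated (a summary sketch; it assumes nothing beyond the polar-coordinate line element derived above). Parametrize the curve as $r=r(t)$, $\theta=\theta(t)$ for $t_0 \le t \le t_1$; then
$$s=\int ds=\int_{t_0}^{t_1}\sqrt{\left(\frac{dr}{dt}\right)^{2}+r(t)^{2}\left(\frac{d\theta}{dt}\right)^{2}}\,dt.$$
On a circle $r=R$ the $dr$ term does indeed vanish (since $dr/dt=0$), leaving $s=\int R\,d\theta=R\,\Delta\theta$, the familiar arc-length formula. For a general curve one picks a parametrization and evaluates the integral, analytically when possible and numerically otherwise.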
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934475839138031, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/puzzle+matrix
Tagged Questions 3answers 278 views How do we solve Eight Queens variation using primes? Using a $p_n$x $p_n$ matrix, how can we solve the Eight queens puzzle to find a prime in every row and column? ... 2answers 590 views Tiling a square I wondered if there was a way to automate the process of finding a way to tile a tile into a square. The idea is to represent the tile with a matrix of 0s for blank space and 1s for filled spaces ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8952457904815674, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/71155/multiplicative-functions-whose-dirichlet-series-have-essential-singularities/71183
Multiplicative functions whose Dirichlet series have essential singularities Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What can be said about the partial sums of a complex-valued completely multiplicative function, let's say bounded by 1 in absolute value, if its Dirichlet series has an essential singularity? As a concrete example, consider the completely multiplicative function defined by $f(p)=i$ for all primes $p$. The Dirichlet series of this function has the Euler product $$L(s,f)=\prod_p \left(1-\frac{i}{p^s}\right)^{-1},$$ and by taking logs, we see that $$\log L(s,f) = -\sum_p \log\left(1-\frac{i}{p^s}\right) = i \sum_p \frac{1}{p^s} + O(1) = i\log\left(\frac{1}{s-1}\right)+\theta(s),$$ where $\theta(s)$ is analytic if $\Re(s)\geq 1$. By taking the limit as $s\to 1$ in this right half-plane, we see that the argument of $L(s,f)$ goes to infinity while its absolute value converges to something non-zero (at least along the real axis), whence $L(s,f)$ has an essential singularity at $s=1$. Standard Dirichlet series techniques (Perron's formula, for example) let us say that the partial sums of $f$, $$S_f(x):=\sum_{n\leq x} f(n),$$ are not $O(x^{1-\epsilon})$ for any $\epsilon>0$, but to my knowledge, this is the best these techniques can achieve. By using a quantitative version of Halasz's theorem, I imagine it should be possible to show that $$S_f(x) \ll x \frac{\log\log^A x}{\log x}$$ for some $A\geq 1$. This bound is pretty good, in that it achieves quantitative savings over the trivial bound $O(x)$, but I have no idea if it is the truth in some sense, and it's also easy to imagine situations in which using Halasz's theorem is not feasible (essentially, $f(p)=i$ is a nice, consistent choice). My question is this: What can be said about $S_f(x)$? Are there good lower bounds for it? Is there a way, even heuristically, to determine its order of magnitude, or even just to get better upper bounds? More generally, are there techniques other than the standard Dirichlet series approach and Halasz's theorem to get good upper bounds on partial sums of such functions? - 1 Answer Hi Robert. For your particular $f$, you'll want to look at this article: A Note on the Compositeness of Numbers A. W. Addison Proceedings of the American Mathematical Society Vol. 8, No. 1 (Feb., 1957), pp. 151-154 Article Stable URL: http://www.jstor.org/stable/2032831 Addison gives the details for the case when $f(p)=\omega$, where $\omega$ is a primitive $3$rd root of unity, but remarks that the the case of $f(p)=i$ (or in general, any $\zeta_n$) is similar. - 2 As for your general question, it appears that certain sorts of essential singularities (e.g., the sort that appear in this example) can be handled by the Selberg--Delange method, which is described in Chapter II.5 of Tenenbaum's book. – Anonymous Jul 25 2011 at 14:03 This seems great. I was interested in that specific example as well as the general case, so this was definitely helpful. Suppose I choose f(p) such that the sum of f(p)/p doesn't converge but also doesn't diverge to infinity in any direction. It seems like this shouldn't be amenable to study using the Selberg-Delange method, at least as presented in Tenenbaum. Are you aware of any techniques which might apply? – rlo Jul 25 2011 at 15:02
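Purely as an illustration of the question, here is a short Python/NumPy experiment (my own, not from the discussion) that computes the partial sums $S_f(x)=\sum_{n\le x} i^{\Omega(n)}$ for the example $f(p)=i$, using the fact that a completely multiplicative $f$ with $f(p)=i$ satisfies $f(n)=i^{\Omega(n)}$, where $\Omega(n)$ counts prime factors with multiplicity. It only gives an empirical feel for the size of $S_f(x)$ against the $x/\log x$ scale suggested by the Selberg-Delange comment; it proves nothing.

```python
# Empirical look at S_f(x) = sum_{n<=x} i^Omega(n) for the example f(p) = i.
import numpy as np

N = 10**6
spf = np.zeros(N + 1, dtype=np.int64)        # smallest-prime-factor sieve
for p in range(2, N + 1):
    if spf[p] == 0:                          # p is prime
        block = spf[p::p]
        block[block == 0] = p                # fill in smallest prime factor where unset

omega = np.zeros(N + 1, dtype=np.int64)      # Omega(n): prime factors with multiplicity
for n in range(2, N + 1):
    omega[n] = omega[n // spf[n]] + 1

powers_of_i = np.array([1, 1j, -1, -1j])
f = powers_of_i[omega % 4]                   # f(n) = i^Omega(n), with f(1) = 1
f[0] = 0                                     # so cumsum index x gives S_f(x)
S = np.cumsum(f)

for x in (10**3, 10**4, 10**5, 10**6):
    print(x, abs(S[x]), abs(S[x]) * np.log(x) / x)   # last column: compare to x/log x
```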
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9449063539505005, "perplexity_flag": "head"}
http://mathinsight.org/determinant_linear_transformation
# Math Insight

### Determinants and linear transformations

A linear transformation $\vc{T}: \R^n \to \R^m$ is a mapping from $n$-dimensional space to $m$-dimensional space. Such a linear transformation can be associated with an $m \times n$ matrix. If we restrict ourselves to mappings within the same space, such as $\vc{T}: \R^n \to \R^n$, then $\vc{T}$ is associated with a square $n \times n$ matrix. One can calculate the determinant of such a square matrix, and such determinants are related to area or volume. It turns out that the determinant of a matrix tells us important geometrical properties of its associated linear transformation. We'll outline this relationship for one-dimensional, two-dimensional, and three-dimensional linear transformations.

#### One-dimensional linear transformations

A one-dimensional linear transformation is a function $T(x) = ax$ for some scalar $a$. To view the one-dimensional case in the same way we view higher dimensional linear transformations, we can view $a$ as a $1 \times 1$ matrix. The determinant of the $1 \times 1$ matrix is just the number $a$ itself. Although this case is very simple, we can gather some intuition about linear maps by first looking at this case.

An example one-dimensional linear transformation is the function $T(x)=3x$. We could visualize this function by its graph, which is a line through the origin with slope 3. However, instead, let's view it as a mapping from the real line $\R$ back onto the real line $\R$. In this case, we think of the function as $x' = T(x)$, which maps the number $x$ on the $x$-axis to the new number $T(x)$ on an $x'$-axis. $T$ takes the number 1 and maps it to 3. $T$ maps 0 to 0 and -1/2 to -3/2. We also use the language that 3 is the image of 1 under the mapping $T$. We can summarize this mapping by looking at how $T$ maps an interval of numbers. For example, $T$ maps the interval $[0,1]$ to the interval $[0,3]$, as illustrated by the following figure. We colored the interval $[0,1]$ and its image $[0,3]$ with a green to red gradient that illustrates how each point in the interval is mapped by $T$. A point of a given color in $[0,1]$ is mapped to a point of the same color in the image $[0,3]$. The fact that the determinant of the matrix associated with $T$ is 3 means that $T$ stretches objects so that their length is increased by a factor of 3. Since the determinant was positive, $T$ preserves the orientation of objects: in both the interval $[0,1]$ and its image $[0,3]$, red points are to the right of green points.

Another example is the linear transformation $T(x)=-\frac{1}{2}x$. As shown in the below figure, $T$ maps the interval $[0,1]$ onto the interval $[-\frac{1}{2},0]$. The determinant of the matrix associated with $T$ is $-\frac{1}{2}$. Since the magnitude of this determinant is $\frac{1}{2}$, $T$ shrinks objects to be half their original length. The negative determinant indicates that $T$ reverses the orientation of objects: as indicated by the colors, the right of the interval $[0,1]$ is mapped onto the left of the interval $[-\frac{1}{2},0]$.

In general, the linear transformation $T(x)=ax$ stretches objects to change their length by a factor of $|a|$. If $a$ is positive, $T$ preserves the orientation; if $a$ is negative, $T$ reverses orientation.
We will obtain similar conclusions for higher-dimensional linear transformations in terms of the determinant of the associated matrix. #### Two-dimensional linear transformations A two-dimensional linear transformation is a function $\vc{T}: \R^2 \to \R^2$ of the form $$\vc{T}(x,y) = (ax+by,cx+dy) = \left[\begin{array}{cc}a &b\\ c &d\end{array}\right]\left[\begin{array}{c}x\\y\end{array}\right],$$ where $a$, $b$, $c$, and $d$ are numbers defining the linear transformation. We can write this more succinctly as $$\vc{T}(\vc{x}) = A\vc{x},$$ where $\vc{x}=(x,y)$ and $A$ is the $2 \times 2$ matrix containing the constants that define the linear transformation, $$A = \left[\begin{array}{cc}a &b\\ c &d\end{array}\right].$$ We will view $\vc{T}$ as mapping objects from the $xy$-plane onto an $x'y'$-plane: $(x',y')=\vc{T}(x,y)$. As in the one-dimensional case, the geometric properties of this mapping will be reflected in the determinant of the matrix $A$ associated with $\vc{T}$. To begin, we look at the linear transformation $$\vc{T}(x,y) = \left[\begin{array}{cc}-2 &0\\ 0 &-2\end{array}\right]\left[\begin{array}{c}x\\y\end{array}\right].$$ As with all linear transformations, it maps the origin $\vc{x}=(0,0)$ back to the origin $(0,0)$. We can get a feel for the behavior of $\vc{T}$ by looking at its action on the standard unit vectors, $\vc{i}=(1,0)$ and $\vc{j}=(0,1)$. $\vc{T}$ maps $(1,0)$ to $(-2,0)$, and it maps $(0,1)$ to $(0,-2)$. It stretched both vectors by a factor of $2$ and rotated them all the way around by $\pi$ radians. To better visualize the mapping of $\vc{T}$, we can examine how it maps a region in the plane. The below figure shows the mapping of $(x',y')=\vc{T}(x,y)$ on the unit square $[0,1] \times [0,1]$. We colored the quarters of the square in different colors to help visualize how points within the square were mapped. The figure demonstrates that $\vc{T}$ did rotate the square by $\pi$ radians around the origin and stretched each side by a factor of 2. We claimed this behavior should be reflected in the determinant of the associated matrix $$A = \left[\begin{array}{cc}-2 &0\\ 0 &-2\end{array}\right].$$ In this case, $\det(A) = (-2)(-2)-(0)(0) = 4$. The determinant is 4 even though it seemed it was streching everything by a factor of 2. And the determinant was positive even though it rotated everything so that points on the right are mapped to points on left and points on the top are mapped to points on the bottom. Shouldn't we have gotten a negative determinant? The reason for getting a factor of 4 rather than 2 is due to the fact that determinants of $2 \times 2$ matrices reflect area not length. In fact, the absolute value of the determinant of a $2 \times 2$ matrix \begin{align*} \left[ \begin{array}{cc} a_1 & a_2\\ b_1 & b_2 \end{array} \right] \end{align*} gives the area of the parallelogram spanned by the vectors $(a_1,a_2)$ and $(b_1,b_2)$. The mapping $\vc{T}$ stretched a $1 \times 1$ square of area 1 into a $2 \times 2$ square of area 4, quadrupling the area. This quadrupling of the area is reflected by a determinant with magnitude 4. The reason for a positive determinant is that, in two-dimensions, rotations, even all the way around by $\pi$ radians, is not considered changing the orientation. If we go counterclockwise around the perimeter of the mapped square, we still encounter the colors in the order red, green, yellow, blue. 
Changing orientation would correspond to taking the square out of the $xy$-plane and flipping it before putting it down in the $x'y'$-plane, as we'll see in the next example. The linear transformation $$\vc{T}(\vc{x}) = A\vc{x}, \qquad A = \left[\begin{array}{cc}-1 &-1\\ 1 &3\end{array}\right]$$ should change orientation as $\det(A) = (-1)(3)-(-1)(1) = -2$. It should also increase area by a factor of $|\det(A)| = 2$. Again, we visualize the transformation $\vc{T}$ by looking at how it maps the unit square $[0,1] \times [0,1]$. As shown below, it maps the square into a parallelogram. Inspection of the paralleogram reveals that it indeed has area 2, so $\vc{T}$ doubles area as claimed. the orientation is also reversed. Moving counterclockwise around the perimeter of the parallelogram leads to the opposite color order red, blue, yelow, green. There is no way to stretch and move the original unit square into the parallelogram without taking it out of the plane and flipping it (or somehow moving the region through itself). The actual geometric shape and rotation of the square's image is not captured by the determinant. We cannot tell whether or not $\vc{T}$ will map the square into a square, rectangle, rhombus, or other parallelogram from knowledge of just the determinant of its associate matrix. (We know it has to be some type of parallelogram, as all two-dimensional linear transformations do map parallelograms into parallelograms.) The determinant simply tells us how $\vc{T}$ changes area and whether or not it reverses orientation. You can experiment with these and other linear transformations using the below applet. You can convince yourself that $\vc{T}$ always maps parallelograms onto parallelograms and that the determinant of its associated matrix does capture area stretching and orientation reversing. The Java applet did not load, and the above is only a static image representing one view of the applet. The applet was created with Geogebra. The applet is not loading because it looks like you do not have Java installed. You can click here to get Java. Linear transformation in two dimensions. The linear transformation $\vc{T}=A\vc{x}$, with $A$ specified in the upper left hand corner of the applet, is illustrated by its mapping of a quadrilateral. The original quadrilateral is shown in the $xy$-plane of the left panel and the mapped quadrilateral is shown in the $x'y'$-plane of the right panel. You can change the linear transformation by typing in different numbers and change either quadrilateral by moving the points at its corners. The determinant of $A$ (shown in upper left hand corner) determines how much $\vc{T}$ stretches or compresses area and whether or not it reverses the orientation of the region. The orientation of each quadrilateral can be determined by examining the order of the colors while moving in a counterclockwise direction around its perimeter. Use the + and - buttons of each panel to zoom in and out. (When $\det A =0$, you cannot drag points in the right panel.) #### Three-dimensional linear transformations The reflection of geometric properties in the determinant associated with three-dimensional linear transformations is similar. A three-dimensional linear transformation is a function $\vc{T}: \R^3 \to \R^3$ of the form $$\vc{T}(x,y,z) = (a_{11}x+a_{12}y + a_{13}z, a_{21}x+a_{22}y+a_{23}z,a_{31}x+a_{32}y+a_{33}z) = A\vc{x}.$$ where $$A=\left[\begin{array}{ccc}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{array}\right]$$ and $\vc{x}=(x,y,z)$. 
The components $a_{ij}$ of $A$ define the linear transformation. As above, the determinant $\det(A)$ reflects some of the geometric properties of the mapping $(x',y',z')=\vc{T}(x,y,z)$. Due to the fact that the absolute value of a determinant gives the volume of a parallelepiped, the absolute value $|\det(A)|$ reflects how much $\vc{T}$ expands volume. The sign of $\det(A)$ reflects whether or not $A$ preserves orientation, as above. If the $\det(A)$ is positve, the associated linear transformation preserves orientation in that it only stretches and rotates objects. On the other hand, if $\det(A)$ is negative, the associated linear transformation reverses orientation by also reflecting the object (taking its mirror image). We illustrate with two examples. The first is the linear transformation associated with the matrix $$A=\left[\begin{array}{rrr}2&1&1\\1&2&-1\\-3&-1&2\end{array}\right].$$ Since $\det(A) = 12$, the linear transformation $\vc{T}(\vc{x}) = A\vc{x}$ expands the volume of objects by a factor of 12. Since the determinant is positive, it preserves the orientation of objects. The action of $\vc{T}$ on the unit cube $[0,1] \times [0,1] \times [0,1]$ is illustrated in the following applet. $\vc{T}$ rotated the cube and stretched it into a parallelepiped of volume 12. You can confirm that $\vc{T}$ preserved orientation by comparing faces that have the same four colors and checking if the colors have the same order when moving counterclockwise. Since both the cube and the parallelepiped have faces with colors ordered red, yellow, white, magenta as one moves counterclockwise, the linear transformation preserved orientation, as it must given that $\det(A)$ is positive. You can change the cube into other parallelepipeds, and $\vc{T}$ always maps it to another parallelepiped, as it must. A three-dimensional linear transformation that preserves orientation. The linear transformation $\vc{T}(\vc{x}) = A\vc{x}$, where $$A=\left[\begin{array}{rrr}2&1&1\\1&2&-1\\-3&-1&2\end{array}\right]$$ maps the unit cube to a parallelepiped of volume 12. The expansion of volume by $\vc{T}$ is reflected by that fact that $\det A = 12$. Since $\det A$ is positive, $\vc{T}$ preserves orientation, as revealed by the face coloring of the cube and parallelogram. The order of the colors on corresponding faces, when moving in a counterclockwise direction, is the same for both the cube and the parallelepiped. For example, both objects have a face with the counterclockwise color order blue, magenta, white, cyan. You can change the cube to other parallelepipeds by dragging the points on four of its vertices. Another example is the linear transformation associated with the matrix $$B=\left[\begin{array}{rrr}3&1&-3\\1&3&-2\\1&1&-3\end{array}\right].$$ Since $\det(B)=-14$, the linear transformation $\vc{T}=B\vc{x}$ stretches volume by the factor $|\det(B)| = 14$. In this case, since $\det(B)$ is negative, the linear transformation reverses orientation. The reversal of orientation can be seen in the below applet illustrating the mapping of the unit cube $[0,1] \times [0,1] \times [0,1]$. $\vc{T}$ maps the cube into a parallelepiped of volume $14$, but also reflects the cube in the process. This reversal of orientation can be observed by noticing that the parallelepiped has a face with the colors in the reverse order red, magenta, white, yellow, when moving counterclockwise. A three-dimensional linear transformation that reverses orientation. 
The linear transformation $\vc{T}=B\vc{x}$, with $$B=\left[\begin{array}{rrr}3&1&-3\\1&3&-2\\1&1&-3\end{array}\right]$$ maps the unit cube to a parallelepiped of volume 14. The expansion of volume is reflected by the determinant $\det B = -14$. Since $\det B$ is negative, $\vc{T}$ not only expands volume by a factor of 14 but also reverse orientation, i.e., reflects objects into their mirror image. The reversal of orientation can be observed through the order of the colors on corresponding faces of the cube and parallelepiped. For example, the cube has a face with the colors ordered black, red, magenta, blue, when moving counterclockwise, while the parallelepiped has a face with the counterclockwise color order black, blue, magenta, red. You can change the cube to other parallelepipeds by dragging the points on four of its vertices. #### Cite this as Nykamp DQ, “Determinants and linear transformations.” From Math Insight. http://mathinsight.org/determinant_linear_transformation Keywords: area, determinants, length, linear transformation, mapping, orientation, scaling, transformation, volume
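As a quick numeric companion to the examples above (a small check, not part of the original page), the following NumPy snippet verifies that the determinant of each $3\times 3$ example matrix equals the signed volume of the parallelepiped spanned by the images of the standard basis vectors, i.e. the columns of the matrix.

```python
# det(M) vs. the signed volume of the image of the unit cube under x -> M x.
import numpy as np

A = np.array([[ 2.,  1.,  1.],
              [ 1.,  2., -1.],
              [-3., -1.,  2.]])
B = np.array([[ 3.,  1., -3.],
              [ 1.,  3., -2.],
              [ 1.,  1., -3.]])

for name, M in (("A", A), ("B", B)):
    e1, e2, e3 = M[:, 0], M[:, 1], M[:, 2]               # images of the basis vectors
    signed_volume = float(np.dot(e1, np.cross(e2, e3)))  # scalar triple product
    print(name, np.linalg.det(M), signed_volume)
# expected: A gives 12 (orientation preserved), B gives -14 (orientation reversed)
```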
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 148, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9032397270202637, "perplexity_flag": "head"}
http://mathoverflow.net/questions/32527/measures-of-the-complexity-of-a-metric/32922
## Measures of the complexity of a metric ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am seeking a measure of the "complexity" of a surface $S$, a quantity that reflects how widely the metric varies from spot to spot. I am primarily interested in surfaces topologically equivalent to a sphere in $\mathbb{R}^3$, so measures that rely on the genus are not useful. Ideally the measure would achieve its minimum for a (round) sphere, would be larger but still small for closed convex surfaces, and large for surfaces with steep mountains and plummeting valleys. Ultimately I need to discretize the measure, but I would like to understand what are the alternatives for smooth metrics. I can concoct reasonable ad hoc measures, but I'd prefer to start from a more principled foundation. From its name, the entropy of a Riemannian manifold sounds like it might be appropriate, but I have only a tenuous grasp of this concept, so it is unclear to me if this aligns with my goals. I've also looked at the systolic ratio and several other geodesic-based concepts, but none seem to capture what I want. I'd appreciate pointers to concepts in this general intellectual neighborhood. Thanks! Addendum. Thanks for the useful suggestions: normalized surface area, Bregman divergence, Gromov-Hausdorff metric, Willmore energy. My question was too vague to permit a definitive answer, but I'll accept Will Jagy's suggestions on the Willmore energy, which taught me much. - You might want to look into something called discrete differential geometry, used mainly for computer graphics. It doesn't provide a formula, but it could help you measure your surface. In general, something like bregman divergence from a sphere would be good, I think. (Though I don't know exactly how you'd measure it). – DoubleJay Jul 19 2010 at 19:47 Are you looking for an intrinsic invariant or one that might depend on the embedding into $R^3$? – Deane Yang Jul 19 2010 at 20:09 @Deane: Good question! I think I want intrinsic, but I'm not really certain. Sorry to be so vague, but this is at an exploratory stage. – Joseph O'Rourke Jul 19 2010 at 20:14 @DoubleJay: Thanks for mentioning the "Bregman divergence," new to me. – Joseph O'Rourke Jul 19 2010 at 20:17 Here is the dude: math.tu-berlin.de/~bobenko – Will Jagy Jul 20 2010 at 3:39 ## 4 Answers I think you would be pretty happy with the Willmore functional for, well, compact orientable $C^\infty$ surfaces in $\mathbb R^3.$ It is just the integral of the square of the mean curvature or $$\frac{1}{2 \pi} \int_{M^2} \; \; H^2 \; dS$$ This quantity is at least 2, and is only equal to 2 for a round sphere. The Willmore Conjecture is that the minimum for an imbedded torus is achieved on the torus (sometimes called the Clifford torus, by the Bryant correspondence) created by revolving a circle of radius 1 with its center at distance $\sqrt 2$ from the axis of revolution. Here the functional has value $\pi.$ Leon Simon proved that the minimum (a priori the infimum) is achieved. Rob Kusner found some rather earlier references (before Willmore) to this problem. See, for example, "Total Curvature in Riemannian Geometry" by Thomas J. Willmore. I do not expect there would be much trouble making a discrete version of this. NOTE: sometimes Willmore writes with the $2 \pi$ divisor, sometimes not. I found a nice wiki page and some pdf's with references and other information, one a schedule for an October 2010 seminar at Oberwolfach. 
Anyway, http://en.wikipedia.org/wiki/Willmore_energy and http://www.mfo.de/programme/schedule/2010/43b/programme1043b.pdf and http://www.warwick.ac.uk/~maseq/wmsri.pdf and http://www.math.ethz.ch/~riviere/papers/riviere-tartar.pdf I was not aware of this, it seems the discrete version of this has been worked out, a fair amount published, including treatment in a book, "Discrete differential geometry" by Alexander I. Bobenko, which can be viewed with google books. I ran google with "discrete willmore functional." - @Will: Ah, very nice! This may be quite useful. Thanks! – Joseph O'Rourke Jul 19 2010 at 20:18 I see that this integral is invariant under conformal maps, a useful property. – Joseph O'Rourke Jul 19 2010 at 22:43 1 Embarrassingly, I actually own a (signed!) copy of Bobenko's book, and did not recognize the usefulness of his coverage of the Willmore flow! – Joseph O'Rourke Jul 20 2010 at 12:21 I don't think that the Willmore energy is a good measure for the varying of the metric. First of all it is an extrinsic measure, so it measures how the surface lies in the space (In that situation it is, of course, a very good measure). Moreover, the Willmore functional is conformally invariant, so you can apply conformal transformations on R3 (better S3) which will change the surface metric in a dramatic way (in general), but it does not change the Willmore functional. Instead, I would say the best metric is the one of constant curvature. Of course, there is an energy functional on metrics which has as minimizers the metrics of constant curvature. Take that one. - Since your metrics embed into R3, you could use the surface area (taken after normalizing the volume) for sufficiently well-behaved metrics. This has the advantage of being easy to calculate in many cases. - @Robin: Yes, this is one of the "ad hoc" measures I thought of. My concern is that two surfaces that differ in intuitive "complexity" might have the same normalized area. Which raises another question: What do all those surfaces with a given normalized area look like? – Joseph O'Rourke Jul 19 2010 at 20:46 Well, that depends on what intuitive notion of "complexity" you're using. Under the surface-area definition, for example, a highly eccentric ellipsoid will look the same as a sphere with very shallow but convoluted wrinkles, since you're not taking variation in "diameter" into account. – Robin Saunders Jul 19 2010 at 21:06 Another thought is to use the Gromov-Hausdorff metric between metric spaces, where one of the spaces could be the intrinsic metric on the sphere. http://en.m.wikipedia.org/wiki/Gromov–Hausdorff_convergence Also see this paper by Memoli: http://math.stanford.edu/~memoli/ShapeComp/sc-simple.html -
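A small worked check of the normalization used in the accepted answer above (with the convention $H=(\kappa_1+\kappa_2)/2$, which is the one that makes the quoted lower bound of 2 come out): for a round sphere of radius $R$ one has $H=1/R$ at every point and surface area $4\pi R^2$, so
$$\frac{1}{2\pi}\int_{M^2} H^2\, dS=\frac{1}{2\pi}\cdot\frac{4\pi R^2}{R^2}=2,$$
independent of $R$, consistent with the statement that the functional is at least 2 with equality exactly for the round sphere.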
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372268319129944, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/3275/gram-schmidt-method-to-identify-linearly-dependent-vectors
# Gram-Schmidt method to identify linearly dependent vectors

A method to orthogonalize a set of vectors (i.e., to produce vectors of unit length that are mutually orthogonal) is the Gram-Schmidt process: http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process Note that the nested for loops imply that the difference is between vector $v_i$ and vectors $\{v_1, \dots, v_{i-1}\}$. However, I wonder about the behavior of the process in case the set of input vectors is linearly dependent. Namely, suppose the process is initialized on vectors $\{v_1, v_2, v_3, v_4, v_5\}$. In case $v_4$ can be expressed as a linear combination of $\{v_1, v_2, v_3\}$, the process will set $v_4=0_n$ (i.e., the zero vector). Now suppose that $v_2$ can be expressed as a linear combination of $\{v_3, v_4, v_5\}$. Would such an ordering of vectors imply that the Gram-Schmidt process will fail to set $v_2=0_n$ (fail to report linear dependence)? If so, what is the common way to ensure that Gram-Schmidt would yield orthonormal vectors and report linearly dependent ones? -

## 2 Answers

If $v_2$ can be expressed as a linear combination of $\{ v_3, v_4, v_5 \}$, but not of a subset of these vectors, then, e.g., $v_5$ is in the linear span of $\{ v_2, v_3, v_4 \}$. So your algorithm would likely map $v_5$ onto $0$. Typically, if you apply the Gram-Schmidt process to the sequence of $v_i$, $1 \leq i \leq L$, a vector $v_k$ is mapped onto $0$ if it is in the linear span of $v_1, \dots, v_{k-1}$, in which case you probably drop it from further considerations. Eventually, your algorithm will produce an orthonormal basis $\{ w_1, \dots, w_l \}$ of the linear span of $\{ v_1, \dots, v_L \}$, $l \leq L$. Of course, this only holds up to rounding errors. Furthermore, you would probably use the stabilized Gram-Schmidt process. An interesting follow-up question is when a vector should be regarded as numerically $0$. -

So, regardless of the ordering, the GS process will set one of the vectors to $0_n$? What does it mean "if $v_2$ can be expressed as a linear combination of $\{v_3, v_4, v_5\}$", then "$v_5$ is in the linear span of $\{v_2, v_3, v_4\}$"? This means that $v_5$ is a linear combination of $\{v_2, v_3, v_4\}$? Your second paragraph repeats the statement. What if, e.g., $v_2$ is in the linear span of $\{v_3, v_4, v_5\}$? – usero Sep 13 '12 at 14:50

Yes, the GS process will return a $0$-vector, up to rounding errors of course. You should know that being a linear combination of a set of vectors is by definition the same as being in their span. As the second paragraph says, if $v_2 \in \operatorname{span}\{v_3,v_4,v_5\}$, then one of the vectors $\{v_3,v_4,v_5\}$ will be processed as the $0$-vector. – Martin Sep 13 '12 at 15:27

Thanks. The ambiguity was caused by your "expressed as a linear combination" followed by "is in the linear span", both within the same sentence (and yet, they are the same). – usero Sep 13 '12 at 15:37

While it doesn't answer your explicit question, I offer the following important remark that might answer the intentions behind the question. The anomalies of the Gram-Schmidt process in case of dependence are avoided if one computes the orthogonalization instead by the standard Householder $QR$ factorization (or in the sparse case the Givens variant). The latter is also much more stable and hence usually the method of choice. Dependence is then simply reflected by one or more zero (or, in finite precision arithmetic, tiny) diagonal entries in $R$.
Column pivoting helps by moving these tiny entries towards the bottom of $R$. - Gram-Schmidt is one algorithm for computing $QR$ factorizations. You seem to be referring specifically to Householder (preferred in practice because orthogonality is more stable), especially the "rank-revealing" versions. – Jed Brown Sep 14 '12 at 14:10 @JedBrown: Yes, Householder or Givens. I updated my answer accordingly. – Arnold Neumaier Sep 14 '12 at 15:18
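To make the rank-detection behaviour concrete, here is a minimal NumPy sketch (my own addition, not from the original thread) of the stabilized (modified) Gram-Schmidt process with a tolerance test; the function name, the tolerance, and the example vectors are all illustrative assumptions. It reproduces the scenario discussed above: with $v_2 = 2v_3 - v_4 + 0.5\,v_5$, it is the later vector $v_5$, not $v_2$, that gets flagged as dependent.

```python
import numpy as np

def mgs_with_dependence_check(V, tol=1e-10):
    """Modified Gram-Schmidt on the columns of V.

    Returns an orthonormal basis Q of the column span and the list of
    column indices that were (numerically) dependent on earlier columns.
    """
    Q = []
    dependent = []
    for k in range(V.shape[1]):
        v = V[:, k].astype(float).copy()
        for q in Q:                                  # subtract projections onto earlier basis vectors
            v -= (q @ v) * q
        norm = np.linalg.norm(v)
        if norm <= tol * np.linalg.norm(V[:, k]):    # nothing left: v_k lies in the earlier span
            dependent.append(k)
        else:
            Q.append(v / norm)
    return (np.column_stack(Q) if Q else np.zeros((V.shape[0], 0))), dependent

# Example: v2 is a combination of v3, v4, v5; the *last* dependent column is the one flagged.
rng = np.random.default_rng(0)
v3, v4, v5 = rng.normal(size=(3, 6))
v2 = 2 * v3 - v4 + 0.5 * v5
v1 = rng.normal(size=6)
V = np.column_stack([v1, v2, v3, v4, v5])
Q, dep = mgs_with_dependence_check(V)
print(dep)                                           # [4]: v5 is dropped, not v2
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1])))      # orthonormality check
```

The relative tolerance test is one usual way of deciding when a vector should be regarded as numerically zero, which is exactly the follow-up question raised in the first answer.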
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275040626525879, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/152082/differentiable-function-of-two-variables/152088
# Differentiable function of two variables

Find the domain of the function $f(x,y)=\log_x y$. Does $f\in C^1$? (I don't know if this symbol is common - it means that $f$ is at least once differentiable, and its first derivative is continuous.)

From basic facts the domain is: $D=\mathbb{R}^2 \setminus \left\{ (x,y) : x=1 \vee y\le 0 \vee x\le 0 \right\}$

Partial derivatives (maybe they will be useful): $\frac{\partial}{\partial x}f(x,y)=-\frac{\ln y}{x\ln^2 x}$ and $\frac{\partial}{\partial y}f(x,y)=\frac{1}{y\ln x}$

But I don't understand what is necessary for $f\in C^1$. Do I just need to check whether both partial derivatives are continuous for all $(x,y)\in D$? I would be very grateful for help. -

## 1 Answer

The statement $f \in C^1$ means that $f$ is continuously differentiable. There is a theorem that says that if each of the partials exists and is continuous at a point, then the function is continuously differentiable at that point. In your case, the partials exist and are continuous on $D$, hence $f \in C^1$. -
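If you want to double-check the partial derivatives symbolically, here is a small SymPy sketch (my own addition, not part of the original exchange); it rewrites $\log_x y$ as $\ln y/\ln x$ and differentiates on the part of the domain with $x>0$, $y>0$.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)   # work on the part of the domain with x > 0, y > 0
f = sp.log(y) / sp.log(x)                 # log_x(y) written with natural logarithms

fx = sp.simplify(sp.diff(f, x))           # expect -ln(y) / (x ln(x)^2)
fy = sp.simplify(sp.diff(f, y))           # expect 1 / (y ln(x))
print(fx)
print(fy)
```

Both expressions are continuous wherever $x>0$, $x\neq 1$, $y>0$, which is the continuity check the answer refers to.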
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.971251368522644, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/10/10/first-degree-homogeneous-linear-equations-with-constant-coefficients/?like=1&source=post_flair&_wpnonce=5092e672ce
# The Unapologetic Mathematician

## First-Degree Homogeneous Linear Equations with Constant Coefficients

Now that we solved one differential equation, let's try a wider class: "first-degree homogeneous linear equations with constant coefficients". Let's break this down.

• First-degree: only involves the undetermined function and its first derivative
• Homogeneous: harder to nail down, but for our purposes it means that every term involves the undetermined function or its first derivative
• Linear: no products of the undetermined function or its first derivative with each other
• Constant Coefficients: only multiplying the undetermined function and its derivative by constants

Putting it all together, this means that our equation must look like this: $\displaystyle a\frac{df}{dx}+bf=0$

We can divide through by $a$ to assume without loss of generality that $a=1$ (if $a=0$ then the equation isn't very interesting at all). Now let's again assume that $f$ is analytic at ${0}$, so we can write $\displaystyle f(x)=\sum\limits_{k=0}^\infty a_kx^k$ and $\displaystyle f'(x)=\sum\limits_{k=0}^\infty (k+1)a_{k+1}x^k$

So our equation reads $\displaystyle0=\sum\limits_{k=0}^\infty\left((k+1)a_{k+1}+ba_k\right)x^k$

That is, $a_{k+1}=\frac{-b}{k+1}a_k$ for all $k$, and $a_0=f(0)$ is arbitrary. Just like last time we see that multiplying by $\frac{1}{k+1}$ at each step gives $a_k$ a factor of $\frac{1}{k!}$. But now we also multiply by $(-b)$ at each step, so (taking $a_0=f(0)=1$ for simplicity) we find $\displaystyle f(x)=\sum\limits_{k=0}^\infty\frac{(-b)^kx^k}{k!}=\sum\limits_{k=0}^\infty\frac{(-bx)^k}{k!}=\exp(-bx)$

and the general solution is $f(0)\exp(-bx)$. And indeed, we can rewrite our equation as $f'(x)=-bf(x)$. The chain rule clearly shows us that $\exp(-bx)$ satisfies this equation. In fact, we can immediately see that the function $\exp(kx)$ will satisfy many other equations, like $f''(x)=k^2f(x)$, $f'''(x)=k^3f(x)$, and so on.

## Comments

Congratulations, Sir. The power series method has commonly been used to solve second-order ODEs, but here you have explored solving a first-order ODE. I obtained so much information from you. Comment by | November 29, 2008 | Reply
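As a quick sanity check on the series manipulation above, here is a short SymPy sketch (my addition, not part of the original post). It verifies that $\exp(-bx)$ solves $f'+bf=0$ and that the recursion $a_{k+1}=\frac{-b}{k+1}a_k$ with $a_0=1$ reproduces the Taylor coefficients of $\exp(-bx)$.

```python
import sympy as sp

x, b = sp.symbols('x b')
f = sp.exp(-b * x)
print(sp.simplify(sp.diff(f, x) + b * f))    # 0: exp(-b x) solves f' + b f = 0

# Partial sums of the series built from the recursion a_{k+1} = -b/(k+1) * a_k with a_0 = 1
a, terms = 1, []
for k in range(8):
    terms.append(a * x**k)
    a = a * (-b) / (k + 1)
series = sp.expand(sum(terms))
print(sp.simplify(series - sp.series(f, x, 0, 8).removeO()))   # 0: same first 8 coefficients
```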
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358781576156616, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/70953/show-by-example-that-the-minkowski-sum-of-two-sets-xy-may-be-convex-even-if-n
# Show by example that the Minkowski sum of two sets $X+Y$ may be convex even if neither $X$ nor $Y$ are convex

There were two parts to this question. I proved that the Minkowski sum of two sets $X+Y$ is convex whenever $X$ and $Y$ are convex, but how do I prove this second part? "Show by example that the Minkowski sum of two sets $X+Y$ may be convex even if neither $X$ nor $Y$ are convex." -

## 3 Answers

$X=\mathbb{R}\setminus\{0\}$ is not convex but $X+X=\mathbb{R}$ is. -

+1 Nice, clean example. If $a > 0$, one could also use $X = (-a,0) \cup (0,a)$. Then, $X+X = (-2a,2a)$. – JavaMan Oct 9 '11 at 3:16

Sorry, I still don't get it. Can you prove your example? Thank you so much~ – xuan Oct 9 '11 at 5:47

What is there not to get? – scineram Oct 9 '11 at 8:47

The example is really straightforward, but I think I need a formal proof, which I don't know how to write... – xuan Oct 9 '11 at 9:54

@xuan: Take $x=-1\in X$, $y=1\in X$ and $t=1/2\in[0,1]$. Then $(1-t)x+ty=0\notin X$, so $X$ is not convex. $X+X=\mathbb{R}$ is the fact that any real number can be written as a sum of two nonzero real numbers; e.g. $1=1/2+1/2$ and $x=(x-1)+1$ for $x\neq 1$. That $\mathbb{R}$ is convex is an easy consequence of the definition of convexity. – LostInMath Oct 9 '11 at 10:16

A stupid example: Let $X$ be any non-convex set and let $Y=\emptyset$. Then $X+Y=\emptyset$ is convex! -

4 But $Y$ is convex. – scineram Oct 9 '11 at 8:46

If this helps you to understand the previous demonstration, you can think of the Minkowski sum as a painting operation over the points of $X$ using $Y$ as a brush. Now think of a convex set $X$ minus one interior point: removing that point is enough to make it no longer convex. This point, or even small gaps, will be painted over by the brush $Y$ standing at nearby points, if the brush is 'fat' enough. The resulting set will be made convex again. Since a typical point of $X+Y$ will be covered by different points of the brush $Y$ standing at different points of $X$, you can remove points of the brush and it will still paint the same region. One of my favourite examples of this is the Minkowski sum of two unit circles. They look very 'thin', but when you use one as a brush over the other you will see it is enough to cover a solid disk of radius 2. -
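A quick numerical illustration of the interval example from the comments (my own sketch, with an arbitrary sampling resolution): take $X=(-1,0)\cup(0,1)$, which has a hole at $0$, and check that the pairwise sums fill out $(-2,2)$ with no gap.

```python
import numpy as np

# X approximates (-1, 0) U (0, 1): not convex, since it misses 0.
pts = np.linspace(-1, 1, 2001)
X = pts[np.abs(pts) > 1e-12]                           # drop 0, so X itself has a hole

sums = np.unique((X[:, None] + X[None, :]).ravel())    # all pairwise sums x + y with x, y in X
print(sums.min(), sums.max())      # -2.0, 2.0 (endpoints only because the finite sample includes -1 and 1)
print(np.diff(sums).max())         # largest gap between consecutive sums: the grid spacing 0.001
print(np.any(np.isclose(sums, 0))) # True: the hole at 0 is filled, e.g. 0.5 + (-0.5)
```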
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478543996810913, "perplexity_flag": "head"}
http://www.openwetware.org/wiki/Physics307L_F08:People/Smith/Notebook/7
# Lab 7: Poisson Statistics

Lab Partner: Kyle Martin

## Preface

The lab manual we have been using for this class, which is last year's lab manual written by Dr. Gold, has a very sparse section for the Poisson Statistics lab. Kyle and I referred to it for the basic premise of the measurements we took and I referred to it for some basic ideas for data analysis.

## Purpose

The overall goal of this lab is familiarization with multichannel analyzers and Poissonian data. Reading through my colleagues' notebooks (yes, there is something to be said for open science, and for the use of wikis), I came up with some important questions I wanted to answer in the course of this lab. Specifically,

• Are the random, independent events of muons (see here for more information about muons and muon sources) striking a scintillation detector in our laboratory described accurately by a Poisson distribution?
• Is the standard deviation of the number of events we measured described accurately by the standard deviation of a Poisson distribution?
• Does the goodness-of-fit of the Poisson distribution change with the anticipated number of events?
• Does a Gaussian distribution accurately represent random, independent events?
• Does the goodness-of-fit of the Gaussian distribution change with the anticipated number of events?

## Materials

For this lab, we used

• A box of lead bricks
• A thallium-doped sodium iodide crystal scintillator (see here, here and here for more about this)
• A preamp, amplifier and discriminator ("PAD")
• 01:06, 10 December 2007 (CST): Ohhhh, so that's what that is? I tried looking it up, and look what I found: constant fraction discriminator. How useful would that be for the speed of light lab, huh?
• A PC data acquisition card with "hydra breakout cable" (which is a connector with many "heads") (a photo would be nice here... I don't have one)
• Multichannel Analyzer Software
• A NIM-bin
• A high-voltage DC power supply for the PMT
• MathWorks' MATLAB software for analyzing the data collected and answering my questions above

### Setup

The photomultiplier tube (PMT) and scintillator are placed in the box of lead bricks, in order to block sources of radiation we don't wish to measure. The PMT has a large potential across it, provided by the high-voltage DC power supply. The scintillator is attached to the PMT. Muons striking the scintillator will create ultraviolet photons which are "detected" by the PMT - incident photons will cause a drop in voltage across the PMT. The scintillation detector (the phototube and attached scintillator) is connected with a BNC cable to the amplifier and discriminator module, which in turn is connected to the data acquisition card in the PC. Voltage drops across the PMT, corresponding to incident muons, will be recorded by the multichannel analyzer software. We don't care about the energy of the muons here, so we don't bother to record the current produced by the PMT, we just record when they occur. For acquiring data, Kyle and I turned the amplifier down ("minimum gain") and didn't mess with the discriminator much. I wish I had remembered to record what settings we used, but they aren't really all that important.
The goal here is to gather the times when some random events happen and examine their distribution; setting the amplifier higher would probably just increase the sensitivity of the muon detection, making the number of events we record higher for any given time interval, and setting the discriminator would do something similar by selecting the threshold voltage of events to pass along to the PC data acquisition card.

## Methods

• The multichannel analyzer software will record events it receives for a given time interval ("dwell time") in one "channel". Setting the MCA software to collect 512 channels at a dwell time of 1 millisecond, for instance, will record events which happen between 0-1 ms in channel 1, events that happen between 1-2 ms in channel 2, and so on until it records 512 channels. The MCA software will display the results on screen as a plot of number of events vs. channel number. Saving the results of a measurement of 512 channels will create an ASCII (or text, if you're like me and don't care for high-falutin' nonsense) file. This file will have several lines of text to start with which record some parameters of the MCA software, followed by 512 rows of 3 columns of numbers, separated by commas. The first column of this file is the channel number, the second column is the number of events recorded, and the third column is something else (I've got no idea what it's for, maybe for pulse height analysis).
• Using the multichannel analyzer software on the PC, Kyle and I took several measurements using our setup. We measured the number of events occurring for dwell times of 1 ms, 2 ms, 4 ms, 8 ms, 10 ms, 20 ms, 40 ms, 80 ms, 100 ms, 200 ms, 400 ms, 800 ms, 1 s, 2 s and 4 s. The 1 ms, 2 ms, 4 ms, 8 ms, 10 ms, 20 ms, and 40 ms dwell-time measurements were taken with 1024 channels; the 80 ms, 100 ms, 200 ms, 400 ms, 800 ms, 1 s, 2 s, and 4 s dwell-time measurements were taken with 256 channels. The output of each of these measurements was saved to file.
• I loaded these files in MATLAB to examine the distribution of events we recorded.

## Data and Analysis

Using MATLAB, I wrote an "M-file", or MATLAB script, to load and examine our data. I used MATLAB's "Publish to HTML" function to save both the code and output (with figures, saved as .png files) as HTML files. I zipped these files, and uploaded the zip file. It can be downloaded here. I went a little bit overboard in scripting this: the script itself is more than 250 lines long. I'm sure if I were actually good at coding in MATLAB, this script would be significantly different (i.e. better!), but I'm just learning MATLAB. SJK 01:11, 10 December 2007 (CST): That is great that you learned some MATLAB! I downloaded the .zip and looked at the html output, which is pretty slick. I will post a description of what I did in my MATLAB script, along with relevant figures from its output. But, first, I will try to describe some relevant details about Poisson and Gaussian distributions.

### About Gaussian Distributions

The Gaussian (or "normal") probability density function is $\frac1{\sigma\sqrt{2\pi}}\; \exp\left(-\frac{\left(x-\mu\right)^2}{2\sigma^2} \right) \!$ where σ is the standard deviation of the data, and μ is the expected value. In our case, I believe the "expected value" is the mean of our data and the "standard deviation" is the standard deviation of our data.
### About Poisson Distributions

The Poisson probability mass function (analogous to the probability density function, but for discrete values) is $f(k;\lambda )=\frac{e^{-\lambda } \lambda^k}{k!},\,\!$ where λ is the expected number of events per time interval and k is the number of events.

### MATLAB Script Summary and Output

• I first calculated the standard deviations of the number of events we recorded (separately for each run of different dwell times, of course). These standard deviations will later be compared to the standard deviations of a Poisson distribution with parameters found from our data (λ, or number of events per time interval).
• For calculating standard deviation, I used the equation $s = \left( \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2 \right) ^{\frac{1}{2}}$, where s is the standard deviation, n is the number of items in the sample, $x_i$ is the ith item in the sample x, and $\bar{x}$ is the mean of the sample x.
• I then plotted our data. I created 3 figures with 4 subplots and 1 figure with 3 subplots. The subplots show something similar to what was displayed by the MCA software: a plot of the number of events vs. the channel.
• In order to compare the distribution of our data to a Poisson distribution and a Gaussian distribution, I did the following:
• I estimated the lambda for each run. These numbers can be seen in Table 1, and I made a plot of these numbers vs. dwell time (see Figure 5).
• In order to estimate the lambda, I used the equation $\lambda_{MLE} = \frac{1}{n} \sum_{i=1}^n x_i$, where $\lambda_{MLE}$ is the "Maximum Likelihood Estimate" of lambda, n is the number of items in the sample, and $x_i$ is the ith item in sample x. I tend to use λ in place of $\lambda_{MLE}$ in my figures, so keep that in mind. If my method of determining the $\lambda_{MLE}$ produces an inaccurate estimate of λ, most of my figures will be affected.
• I used the lambda estimated above to create vectors of the probability of seeing k number of events using a Poisson probability mass function, and to create vectors of the probability of seeing k number of events using a Gaussian probability density function.
• In order to estimate the probability of seeing k number of events using a Poisson PMF with λ events per time interval, I used $y = \frac{\lambda^k}{k!}e^{-\lambda}$, where λ is $\lambda_{MLE}$ determined earlier and k is the number of events I wish to evaluate at.
• In order to estimate the probability of seeing k number of events using a Gaussian PDF with λ as the mean and the standard deviation I calculated earlier, I used $y = \frac{1}{\sigma \sqrt{2\pi}} e^{\frac{-(k-\mu)^2}{2\sigma ^2}}$, where μ is the mean (also, interestingly, $\lambda_{MLE}$), σ is the standard deviation, and k is the number of events I wish to evaluate at.
• I created a histogram vector from our data: the probability of seeing k number of events. To calculate the "probability" of seeing k number of events, I divided the frequency of k number of events by the total number of events. I think this makes sense, since the sum of this vector is 1.
• I plotted the three distributions (our data pdf, Poisson pmf, and Gaussian pdf) vs. k number of events. I created separate figures for each run. These figures can be seen on this page as Figures 6-20.
• For each run, I found what I think is the Chi-Square goodness-of-fit of the Poisson PMF to our data, and the Chi-Square goodness-of-fit of the Gaussian PDF to our data.
I had a bit of trouble figuring out how to use the Chi-Square distribution to measure the goodness-of-fit of distributions to our data, and I may have ended up doing it wrong, but here's what I did:

• In order to calculate the Chi-Square goodness-of-fit of the two different probability functions to our data, I used $\chi^2 = \sum \frac{(X_i - \mu_i)^2}{\sigma_i^2}$, where $X_i$ is the probability of seeing k events we measured, $\mu_i$ is the probability 'prediction' for that number of events and $\sigma_i$ is the standard deviation. (A short Python sketch of these calculations appears after the tables below.) SJK 01:21, 10 December 2007 (CST): I'd have to think about this some more too...I think it's a great goodness of fit measure, but not sure whether it's a parallel of chi-squared.
• In order to determine whether the goodness-of-fit of the Poisson distribution and the goodness-of-fit of the Gaussian distribution vary with λ, I created a log-log plot of Chi-Square vs. Lambda. This plot can be seen as Figure 21.
• In order to compare the standard deviation of our data to the standard deviation of a Poisson distribution, I created a log-log plot of the standard deviation of our data and the standard deviation of a Poisson distribution vs. dwell time. This figure can be seen as Figure 22.

### Figures and Tables

Table 1: Maximum Likelihood Estimate of λ and standard deviations

| Dwell Time | λMLE | Standard Deviation of Data | Standard Deviation of Poisson Distribution with λMLE |
|---|---|---|---|
| 1ms | 0.00586 | 0.088 | 0.077 |
| 2ms | 0.0117 | 0.125 | 0.108 |
| 4ms | 0.0264 | 0.183 | 0.162 |
| 8ms | 0.0684 | 0.330 | 0.252 |
| 10ms | 0.0723 | 0.311 | 0.269 |
| 20ms | 0.147 | 0.503 | 0.384 |
| 40ms | 0.287 | 0.644 | 0.536 |
| 80ms | 0.641 | 1.02 | 0.800 |
| 100ms | 0.648 | 1.05 | 0.805 |
| 200ms | 1.55 | 1.46 | 1.24 |
| 400ms | 2.75 | 2.22 | 1.66 |
| 800ms | 5.86 | 2.90 | 2.42 |
| 1s | 7.14 | 3.28 | 2.67 |
| 2s | 14.6 | 4.83 | 3.82 |
| 4s | 28.6 | 7.03 | 5.35 |

Table 2: χ2 goodness-of-fit for Poisson and Gaussian distributions

| Dwell Time | Poisson PDF χ2 to data histogram | Gaussian PDF χ2 to data histogram |
|---|---|---|
| 1ms | 0.0007 | 2267 |
| 2ms | 0.0014 | 461.1 |
| 4ms | 0.0023 | 62.49 |
| 8ms | 0.0132 | 2.530 |
| 10ms | 0.0092 | 2.498 |
| 20ms | 0.0268 | 0.023 |
| 40ms | 0.0323 | 0.1548 |
| 80ms | 0.0259 | 0.1242 |
| 100ms | 0.0318 | 0.1312 |
| 200ms | 0.0081 | 0.0154 |
| 400ms | 0.0027 | 0.0037 |
| 800ms | 0.0007 | 0.0008 |
| 1s | 0.0005 | 0.0007 |
| 2s | 0.0003 | 0.0003 |
| 4s | 0.0001 | 0.0001 |

SJK 01:19, 10 December 2007 (CST): These are all very nice figures, and Figure 21 is particularly cool. A very nice way to plot things. I don't have time to think further now, but I am curious as to why the chi-square peaks for the Poisson where it does. I suppose maybe that's because it has to go to zero for low lambda (if you get all zero counts, the fit can be perfect)...maybe you talk about this, I haven't read everything yet.
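For readers without MATLAB, here is a rough Python/NumPy sketch of the same analysis (my own reconstruction, not the original M-file; the function name, the simulated run standing in for an MCA file, and the exact placement of the sigma divisor in the goodness-of-fit formula are assumptions based on the description above).

```python
import numpy as np
from math import factorial

def analyze(counts):
    """counts: 1-D integer array of events per dwell-time bin (one MCA run)."""
    counts = np.asarray(counts)
    lam = counts.mean()                      # maximum-likelihood estimate of lambda
    s = counts.std(ddof=1)                   # sample standard deviation, 1/(n-1) form

    ks = np.arange(counts.max() + 1)
    data_pmf = np.bincount(counts, minlength=ks.size) / counts.size
    poisson = np.array([lam**k * np.exp(-lam) / factorial(k) for k in ks])
    gauss = np.exp(-(ks - lam)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

    # One reading of the chi-square-like statistic in the write-up: sum of squared
    # differences divided by the (single) sample variance.
    gof = lambda model: np.sum((data_pmf - model)**2) / s**2
    return lam, s, gof(poisson), gof(gauss)

# Simulated Poisson data standing in for one MCA run (roughly the 200 ms dwell-time case)
rng = np.random.default_rng(1)
fake_run = rng.poisson(lam=1.55, size=256)
print(analyze(fake_run))
```

The real analysis would load the 256- or 1024-channel MCA text files instead of the simulated `fake_run`.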
Figure captions (the images themselves are in the original notebook):

• Figure 1: Plot of MCA output: Number of events vs. Channel Number, Dwell Times 1ms, 2ms, 4ms, 8ms
• Figure 2: Plot of MCA output: Number of events vs. Channel Number, Dwell Times 10ms, 20ms, 40ms, 80ms
• Figure 3: Plot of MCA output: Number of events vs. Channel Number, Dwell Times 100ms, 200ms, 400ms, 800ms
• Figure 4: Plot of MCA output: Number of events vs. Channel Number, Dwell Times 1s, 2s, 4s
• Figure 5: Lambda vs. Dwell Time, loglog plot. I used this to eyeball my estimated lambdas for plausibility. Since the plot is more-or-less linear, I think my estimated lambdas are ok.
• Figures 6-20: Probability vs. Number of Events, one figure per run (dwell times 1 ms, 2 ms, 4 ms, 8 ms, 10 ms, 20 ms, 40 ms, 80 ms, 100 ms, 200 ms, 400 ms, 800 ms, 1 s, 2 s, 4 s). Each shows the data histogram, Poisson PMF and Gaussian PDF.
• Figure 21: χ2 vs. lambda. As lambda increases, the goodness-of-fit of the Gaussian PDF approaches that of the Poisson PMF.
• Figure 22: Standard deviations of data and Poisson distribution vs. Dwell Time. The standard deviation of the Poisson distribution is consistently lower than that of the raw data.
• Figure 23: Probability vs. Number of Events, Dwell Time 4 s. Shown are the data histogram, Poisson PMF and Gaussian PDF. For the Gaussian PDF, I used the Poisson standard deviation ($\sqrt{\lambda}$) as the standard deviation of the Gaussian distribution. The resulting Gaussian distribution fits the Poisson distribution very closely, shifted by a bit (maybe half of a bin?)

## Conclusions and Remarks

In my purpose, I wanted to answer several questions. I'll try to briefly answer them here.

• Are the random, independent events of muons striking a scintillation detector in our laboratory described accurately by a Poisson distribution?
• While difficult to answer conclusively, I believe the answer to this question is yes. As evidence, I present the Chi-Square goodness-of-fit of the Poisson PMF for my data. From what I've read, if this number is less than 1 or so, the fit is good. Since my Chi-Square ranges from 0.0001 to 0.032, I believe my fit is excellent.
• 01:26, 10 December 2007 (CST): This is what I was hinting at above. It may be that you were over-estimating your sigma or something. I haven't spent time looking at it unfortunately. When the distribution is normal, the reduced chi-squared (chi-squared per degree of freedom) should be about 1 for a good fit, I think. If it is way too low, it means the errors are over-estimated. The fact that yours tends to zero probably means it is not quite a chi-squared parallel. But a great method none-the-less.
• Is the standard deviation of the number of events we measured described accurately by the standard deviation of a Poisson distribution?
• Qualitatively, the answer is sort of. In Figure 22, I demonstrate that the standard deviation of my data is always a bit larger than the standard deviation of a Poisson distribution (the square root of lambda). I don't know why this is; systematic error is a probable culprit, although I'm unsure what exactly would cause this. Somehow, I think the events that the MCA counted were not completely random and independent, or maybe the discriminator didn't pass along many events it should have, or something else happened.
• Does the goodness-of-fit of the Poisson distribution change with the anticipated number of events?
• Yes. As lambda increases, as seen in Table 2 and Figure 21, the goodness-of-fit improves. This is a relatively small change compared to the Gaussian distribution's goodness-of-fit, though.
• Does a Gaussian distribution accurately represent random, independent events? And does the goodness-of-fit of the Gaussian distribution change with the anticipated number of events?
• If the number of anticipated events you are examining is large enough, I believe the Gaussian distribution accurately represents what you see. As Figure 21 and Table 2 demonstrate, the goodness-of-fit drastically changes as lambda changes.

I was wholly unfamiliar with the Poisson distribution before this lab, and became fairly comfortable with it by the end. I also got a bit more comfortable with the Gaussian distribution and using it. I did struggle with MATLAB at first, but I think I got a handle on it. And I got some nice figures out of MATLAB, with a bit of work, that I'm pleased with.
I was a little unclear about how to use the Poisson PMF and Gaussian PDF to estimate the number of events one would expect to see, and still am slightly troubled that I didn't ever integrate anything.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9160292148590088, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/52355/what-do-the-j-and-m-stand-for-in-j-m-rangle-for-angular-momentum-in-quant
# What do the $j$ and $m$ stand for in $|j,m\rangle$ for angular momentum in quantum mechanics? I'm assuming it is a jth state with m value as total angular momentum? - Wikipedia discusses quantization of angular momentum e.g. here and here. – Qmechanic♦ Jan 28 at 1:45 – Chris White Jan 28 at 2:13 ## 1 Answer The states $|j,m\rangle$ are simultaneous eigenstates of the total angular momentum squared operator $\mathbf J^2$ and the $z$-component of the total angular momentum operator $J_z$. The letter $j$ is related to the eigenvalue of the operator $\mathbf J^2$, while the letter $m$ gives the eigenvalue of the operator $J_z$. Specifically $$\mathbf J^2|j,m\rangle =\hbar^2 j(j+1)|j,m\rangle, \qquad J_z|j,m\rangle = \hbar \,m|j,m\rangle$$ Given these considerations, $j$ is called the total angular momentum quantum number. Hope that helps! Let me know if you'd like more details. Cheers! - But does J here include spin or not? If it doesn't, isn't it notationally more common to denote the total ang. momentum by $\ell$ and not $j$? I thought $j$ was for the total ang. momentum including spin... – daaxix Jan 29 at 18:30 @daaxix The letter J is generically used for all forms of angular momentum in quantum. What $J_i$ denotes depends on the context. As long as the components of $\mathbf J$ satisfy the angular momentum commutation relations $[J_i, J_j] = i\hbar\epsilon_{ijk}J_k$, one can show that the analysis of its eigenvalues and eigenvectors leads to states labeled in this way. So $J_i$ could denote the components of orbital angular momentum, or spin, or any combination of both, and the notation will carry through. – joshphysics Jan 29 at 18:40
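As a concrete illustration (my own addition, not part of the original answer), here is a small NumPy sketch that builds the standard matrices for $J_z$ and $J_\pm$ in the $|j,m\rangle$ basis and checks the eigenvalue relations quoted above; the choice $\hbar=1$ and the value of $j$ are arbitrary.

```python
import numpy as np

hbar = 1.0
j = 1                                        # try any integer or half-integer, e.g. 0.5, 1, 1.5

m = np.arange(j, -j - 1, -1)                 # m = j, j-1, ..., -j labels the basis |j, m>
Jz = hbar * np.diag(m)
# J_+ |j, m> = hbar * sqrt(j(j+1) - m(m+1)) |j, m+1>: entries sit one step above the diagonal
Jp = hbar * np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
Jx = (Jp + Jp.T) / 2
Jy = (Jp - Jp.T) / (2 * 1j)

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.allclose(J2, hbar**2 * j * (j + 1) * np.eye(m.size)))   # True: J^2 = hbar^2 j(j+1) I
print(np.allclose(np.diag(Jz), hbar * m))                        # J_z |j, m> = hbar m |j, m>
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * hbar * Jz))            # [J_x, J_y] = i hbar J_z
```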
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8761017322540283, "perplexity_flag": "head"}
http://mathoverflow.net/questions/40689?sort=newest
## What is the angle between two complex vectors?

Let $x, y\in \mathbb{R}^n$ with $x, y$ nonzero; it is well known that $\frac{x^Ty}{\parallel x\parallel_2\parallel y\parallel_2}(\parallel x\parallel_2+\parallel y\parallel_2)\le \parallel x+y\parallel_2$. How to extend this to complex vectors? $\arccos\frac{x^Ty}{\parallel x\parallel_2\parallel y\parallel_2}$ is the angle between $x$ and $y$. What is the appropriate definition of the angle between two complex vectors? I know $\frac{\mid x^*y\mid}{\parallel x\parallel_2\parallel y\parallel_2}(\parallel x\parallel_2+\parallel y\parallel_2)\le \parallel x+y\parallel_2$ does not hold generally. -

## 6 Answers

Let $x,y$ be two nonzero complex vectors, let $\hat x=x/\|x\|$ and $\hat y=y/\|y\|$, and consider the parabola $$\phi(t)=\|t\hat x+(1-t)\hat y\|^2=1+2(t^2-t)(1-\Re(\hat x \overline{\hat y})).$$ You easily check that $\phi(t)\ge\phi(1/2)$ for all $t$. This gives the inequality $$\|t\hat x+(1-t)\hat y\|\ge \sqrt{\frac{1+\Re(\hat x \overline{\hat y})}2}$$ for all real $t$. This is stronger than your inequality, which can be obtained by choosing $$t=\frac{\|x\|}{\|x\|+\|y\|}$$ at the left hand side, and noticing that $$\sqrt{\frac{1+\sigma}2}\ge\sigma$$ for all real $|\sigma|\le1$ at the right hand side. So yes, the correct extension is using $\Re(x \cdot\overline{y})$ instead of $x\cdot y$. -

1 This is just the cosine of the angle between the two vectors as real vectors. There is a more complex version of the angle between two complex vectors. They correspond to points in $\mathbb{C}P(n-1)$ and span a copy of $\mathbb{C}P(1)$. The copy of $\mathbb{C}P(1)$ is a round sphere of radius $1/2$ in the Fubini-Study metric. The complex angle between the two vectors is the arc length of the shortest geodesic on $\mathbb{C}P(1)$ joining them. The complex angle takes on values between $0$ and $\pi/2$. – Charlie Frohman Oct 12 2010 at 13:36

My point was to find the complex substitute in the inequality proposed by miwa – Piero D'Ancona Oct 12 2010 at 16:46

Dear Reju, thank you for that Scharnhorst reference - it has saved me an immense amount of time-wasting in my research! I'm surprised that stuff is not better known ... -

K. Scharnhorst, "Angles in complex vector spaces," Acta Applicandae Math., vol. 69, pp. 95–103, Nov. 2001, and the appendix in V. G. Reju, S. N. Koh and I. Y. Soon, "Underdetermined Convolutive Blind Source Separation via Time-Frequency Masking," IEEE Transactions on Audio, Speech and Language Processing, Vol. 18, No. 1, Jan. 2010, pp. 101–116. -

Actually, the case of complex vector spaces is rather a particular case than an extension, with respect to the case of real vector spaces. Recall that, as a vector space over $\mathbb{R}$, your $\mathbb{C}^n$ is isomorphic to $\mathbb{R}^{2n}$, and that, in terms of the Hermitian form of the former, the standard scalar product of the latter writes $\Re(x\cdot \bar y)$. Generally speaking, the appropriate definition of topological/uniform/metric notions for complex vector spaces is just the same as for real vector spaces, seeing the complex vector spaces as real vector spaces by restriction of scalars. So the angle of vectors in $\mathbb{C}^n$ is just the angle in $\mathbb{R}^{2n}$.
Sometimes, in the complex version, one also requires some kind of algebraic compatibility with the complex structure (e.g., the definition of a norm for complex VS). -

Let $V$ be a Euclidean vector space (in particular $V$ can be a Hermitean vector space considered as a real vector space). According to J.H.C. Whitehead (Manifolds with transversal fields in Euclidean space, Ann. Math. 73, 154-212) the angle between vector subspaces $V_1$ and $V_2$ of $V$ can be defined as the Hausdorff distance (see e.g. http://en.wikipedia.org/wiki/Hausdorff_distance) between their intersections with the unit sphere. More explicitly, it is $$\max\left(\max_{x\in V_1\cap S_0}\angle(x,V_2),\;\max_{x\in V_2\cap S_0}\angle(x,V_1)\right)$$ where $S_0$ is the unit sphere and $\angle(x,W)$ is the angle between $x$ and its orthogonal projection to $W$ (if $x$ is orthogonal to $W$, the angle is set to be $\frac{\pi}{2}$). -

1 It is not easy to determine the angle in terms of entries of $x$ and $y$. – Sunni Oct 1 2010 at 2:42

Probably use inner product Re(x* y), where x* is the conjugate transpose of x ... -

Note that with this setup, the "angle" operator is no longer commutative; take care with using this. – J. M. Oct 1 2010 at 0:33

1 @J.M., it is still commutative, since $Re(x^* y)=Re(y^* x)$. But I don't know whether $\frac{Re\, x^*y }{\parallel x\parallel_2\parallel y\parallel_2}(\parallel x\parallel_2+\parallel y\parallel_2)\le \parallel x+y\parallel_2$ is true. – Sunni Oct 1 2010 at 1:06

1 J.M.: it may not look commutative, but it is. Consider the symmetrized version of the inner product $\langle x,y\rangle = \frac{1}{2}(x^*y + [x^*y]^*) = \mathrm{Re}(x^*y)$. This inner product is just the usual inner product on $\mathbb{R}^{2n}$, treating $x$ and $y$ as real vectors: $x \to (\mathrm{Re}(x), \mathrm{Im}(x))$. So the inequality for real vectors continues to hold for $\mathbb{C}$. – ronaf Oct 1 2010 at 1:28

I missed the $\Re$ on first reading; thanks for pointing it out. :) – J. M. Oct 1 2010 at 3:06
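A quick numerical check of the proposed extension (my own sketch, not from the thread): the inequality with $\Re(x^*y)$ in place of $x^Ty$ is tested on random complex vectors, and the angle is $\arccos$ of the normalized real part, i.e. the ordinary angle in $\mathbb{R}^{2n}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def check(n=5, trials=10000):
    """Test  Re(x* y)/(|x||y|) * (|x| + |y|) <= |x + y|  on random complex vectors."""
    worst = np.inf
    for _ in range(trials):
        x = rng.normal(size=n) + 1j * rng.normal(size=n)
        y = rng.normal(size=n) + 1j * rng.normal(size=n)
        nx, ny = np.linalg.norm(x), np.linalg.norm(y)
        lhs = np.real(np.vdot(x, y)) / (nx * ny) * (nx + ny)   # vdot conjugates its first argument
        worst = min(worst, np.linalg.norm(x + y) - lhs)
    return worst   # should stay >= 0 (any tiny negative value would just be rounding)

print(check())

# The angle between two complex vectors, read as the angle in R^{2n}:
x = np.array([1 + 1j, 2j]); y = np.array([1j, 1 - 1j])
cos_angle = np.real(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.arccos(cos_angle))
```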
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211604595184326, "perplexity_flag": "head"}
http://physicspages.com/2011/02/08/harmonic-oscillator-series-solution/
Notes on topics in science

## Harmonic oscillator – series solution

Required math: calculus
Required physics: harmonic oscillator

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Sec 2.3.

The complete Schrödinger equation for the harmonic oscillator potential is

$\displaystyle -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}+\frac{1}{2}kx^{2}\psi=E\psi$

To solve this equation, we split the wave function ${\psi}$ into two factors: the first factor is the asymptotic behaviour for large ${x}$, and the second is a function ${f}$ which we have yet to find. To simplify the notation we introduced two auxiliary variables. The independent variable ${y}$ is related to the spatial variable ${x}$ by

$\displaystyle y\equiv\sqrt{\frac{m\omega}{\hbar}}x$

and the parameter ${\epsilon}$ is related to the energy ${E}$ by

$\displaystyle \epsilon\equiv\frac{2E}{\hbar\omega}$

The parameter ${\omega}$ is the frequency of the oscillator. After analyzing the asymptotic behaviour of the Schrödinger equation for the harmonic oscillator, we write the wave function in the form

$\displaystyle \psi(y)=e^{-y^{2}/2}f(y)$

Substituting this back into the Schrödinger equation gives us a differential equation for ${f(y)}$:

$\displaystyle \frac{d^{2}f}{dy^{2}}-2y\frac{df}{dy}+(\epsilon-1)f=0\ \ \ \ \ (1)$

If we hurl this equation into mathematical software like Maple, it tells us that the solution involves two forms of Kummer functions, otherwise known as confluent hypergeometric functions of the first and second kinds. Apart from being able to impress your friends in the pub, these terms don't really help us learn much about the physics. For that we need to solve the differential equation using a power series. The idea is to propose a solution of the form

$\displaystyle f(y)=\sum_{j=0}^{\infty}a_{j}y^{j}$

The theory behind Taylor series in elementary calculus assures us that for any 'reasonable' function (that is, pretty well any function found in physics), it is possible to write the function as a power series, so we should be able to find such a solution. At this stage, we can't guarantee that such a solution will tell us much, but it's worth a try. To use the series, we need to calculate its first two derivatives:

$\displaystyle \begin{aligned}\frac{df}{dy}&=\sum_{j=0}^{\infty}ja_{j}y^{j-1}\\ \frac{d^{2}f}{dy^{2}}&=\sum_{j=0}^{\infty}j(j-1)a_{j}y^{j-2}\\ &=\sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}y^{j}\end{aligned}$

The fancy footwork in the last line just relabels the summation index to make it more convenient for the next step, as we'll see. To convince yourself it is the same series as the line above it, just write out the first 4 or 5 terms in the series and you'll see it is the same. Notice also that in the first derivative series the first term is zero due to the factor of ${j}$, so we don't actually get a term with ${y^{-1}}$ in it. We now want to substitute these derivatives back into the differential equation 1 we want to solve.
The reason we juggled the summation index in the second derivative is that we want the series in all three terms in the equation to contain ${y^{j}}$ terms rather than ${y}$ to some other power. This makes it easier to group together the terms with equal powers of ${y}$. Doing the substitution, we get:

$\displaystyle \begin{aligned}\sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}y^{j}-2\sum_{j=0}^{\infty}ja_{j}y^{j}+(\epsilon-1)\sum_{j=0}^{\infty}a_{j}y^{j}&=0\\ \sum_{j=0}^{\infty}\left[(j+2)(j+1)a_{j+2}-2ja_{j}+(\epsilon-1)a_{j}\right]y^{j}&=0\end{aligned}$

From the mathematics of power series expansions, it is known that any given function's expansion is unique (the proof takes us too far into pure mathematics so we'll leave it for now). That means that, having decided on a value for ${\epsilon}$, there is one and only one sequence of ${a_{j}}$s that defines the function ${f(y)}$. So, since ${y}$ can be any value, the only way the above sum can be zero for all values of ${y}$ is if the coefficient of each power of ${y}$ vanishes separately. That is,

$\displaystyle (j+2)(j+1)a_{j+2}-2ja_{j}+(\epsilon-1)a_{j}=0$

This, in turn, gives a recursion relation for the coefficients:

$\displaystyle a_{j+2}=\frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j}\ \ \ \ \ (2)$

Since we are solving a second order differential equation we would expect to have two arbitrary constants that must be determined by initial conditions and normalization, and we see that since the recursion formula relates every second coefficient, we need to specify both ${a_{0}}$ and ${a_{1}}$ to be able to generate all the coefficients. If we start off with ${a_{0}}$ we get all the even coefficients:

$\displaystyle \begin{aligned}a_{2}&=\frac{1-\epsilon}{2}a_{0}\\ a_{4}&=\frac{5-\epsilon}{12}a_{2}=\frac{(5-\epsilon)(1-\epsilon)}{24}a_{0}\\ &\;\;\vdots\end{aligned}$

There is a similar sequence of calculations for the odd coefficients starting with ${a_{1}}$. That's about as far as we can go without using some external information to put some conditions on the series. As usual, we require the solution (the original solution, that is, ${\psi}$) to be normalizable. We now know that this solution has the form

$\displaystyle \psi(y)=e^{-y^{2}/2}f(y)=e^{-y^{2}/2}\sum_{j=0}^{\infty}a_{j}y^{j}$

So in order to be normalizable, the series will have to converge to some function that doesn't expand to infinity as fast as ${e^{y^{2}/2}}$. Otherwise the series term will kill off the negative exponential, and the overall wave function will not tend to zero as ${y}$ goes to infinity.
This seems like a difficult condition to check, but let's have a look at the asymptotic behaviour (for large ${j}$) of the recursion formula 2.

$\displaystyle \begin{aligned}a_{j+2}&=\frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j}\\ &=\frac{2j+1-\epsilon}{j^{2}+3j+2}a_{j}\\ &\sim\frac{2}{j}a_{j}\end{aligned}$

Thus the ratio of two successive even terms (or two successive odd terms) in the series is

$\displaystyle \frac{a_{j+2}y^{j+2}}{a_{j}y^{j}}=\frac{2}{j}y^{2}\ \ \ \ \ (3)$

How does this compare with the series for an exponential function? The Taylor series for ${e^{x^{2}}}$ is

$\displaystyle e^{x^{2}}=\sum_{j=0}^{\infty}\frac{x^{2j}}{j!}=\sum_{j\;\mathrm{even}}^{\infty}\frac{x^{j}}{(j/2)!}$

The ratio of two successive terms from this series is

$\displaystyle \frac{x^{j+2}/((j+2)/2)!}{x^{j}/(j/2)!}=\frac{2}{j+2}x^{2}$

which for large ${j}$ is essentially the same as relation 3. And this is for only half (either even or odd terms) of the series; the other half will contribute another function of roughly equal size. Thus it looks like the series' asymptotic behaviour is that of ${e^{y^{2}}}$, so the overall behaviour of the wave function is ${e^{y^{2}}e^{-y^{2}/2}=e^{y^{2}/2}}$, which diverges and is therefore not normalizable. This looks like a serious problem, but there is in fact a way out: if the series terminates after a finite number of terms, then the behaviour is that of a polynomial rather than an exponential, and multiplying any polynomial by ${e^{-y^{2}/2}}$ will always give a normalizable function. So if we can arrange things so that the recursion formula 2 gives ${a_{j+2}=0}$ for some ${j}$, then clearly all further terms will be zero. The condition to be satisfied is therefore

$\displaystyle \begin{aligned}2j+1-\epsilon&=0\\ \epsilon&=2j+1\\ E&=\frac{1}{2}\hbar\omega(2j+1)\end{aligned}$

where ${j}$ is some integer 0, 1, 2, 3, … Note however, that each choice of ${j}$, that is, each choice of where the series terminates, gives a different value for the energy. The lowest possible energy for the harmonic oscillator is when ${j=0}$, and is ${E_{0}=\frac{1}{2}\hbar\omega}$, and the energies increase at regular intervals of ${\hbar\omega}$, so the energy levels are all equally spaced. It is more usual to give the energy formula as

$\displaystyle E_{n}=\left(n+\frac{1}{2}\right)\hbar\omega$

with ${n=}$ 0, 1, 2, 3, 4, … One note of caution here. Once we have chosen an energy level, this fixes the value of ${j}$ at which the series terminates.
If ${j}$ is even, then the odd series must be zero right from the start, and vice versa. There is no way of getting both the even and odd series to terminate at some intermediate values in the same solution. So if we choose an even value of ${j}$ we must have ${a_{1}=0}$ to remove all the odd terms from the sum, and conversely if we choose ${j}$ to be odd, we must have ${a_{0}=0}$ to remove all the even terms. The stationary states for the harmonic oscillator are therefore products of polynomials and the exponential factor. The polynomials turn out to be well-studied in mathematics and are known as Hermite polynomials. We will explore their properties in another post. To summarize the behaviour of the quantum harmonic oscillator, we'll list a few points.

1. The harmonic oscillator potential is parabolic, and goes to infinity at infinite distance, so all states are bound states – there is no energy a particle can have that will allow it to be free.
2. The energies are equally spaced, with spacing ${\hbar\omega}$.
3. The lowest energy is the ground state ${E_{0}=\hbar\omega/2}$, so a particle always has positive, non-zero energy.
4. The stationary states consist of either an even or an odd polynomial function multiplied by ${e^{-y^{2}/2}=e^{-m\omega x^{2}/2\hbar}}$, which is always even. Thus a stationary state is either an even or an odd function of ${y}$ (and hence of ${x}$).
5. The polynomial functions in the stationary states are Hermite polynomials.
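A quick numerical sketch of the termination argument (my own addition, not part of the original post): the coefficients are generated from recursion (2), and the series cuts off at $j=n$ exactly when $\epsilon=2n+1$, while a generic $\epsilon$ never terminates.

```python
import numpy as np

def series_coeffs(n, jmax=40):
    """Coefficients a_j from a_{j+2} = (2j + 1 - eps) / ((j+1)(j+2)) a_j with eps = 2n + 1.

    Starts the even series from a_0 = 1 (n even) or the odd series from a_1 = 1 (n odd).
    """
    eps = 2 * n + 1
    a = np.zeros(jmax)
    a[n % 2] = 1.0
    for j in range(n % 2, jmax - 2, 2):
        a[j + 2] = (2 * j + 1 - eps) / ((j + 1) * (j + 2)) * a[j]
    return a

for n in range(4):
    print(n, np.nonzero(series_coeffs(n))[0])   # nonzero coefficients stop at j = n

# With eps not of the form 2n+1 the coefficients never vanish, mimicking the e^{y^2} series
a = np.zeros(40); a[0] = 1.0; eps = 2.4
for j in range(0, 38, 2):
    a[j + 2] = (2 * j + 1 - eps) / ((j + 1) * (j + 2)) * a[j]
print(a[36] != 0.0)   # True: no termination, so psi ~ e^{y^2/2} and is not normalizable
```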
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 116, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9140956997871399, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/49734/taking-the-second-derivative-of-a-parametric-curve?answertab=active
# Taking the second derivative of a parametric curve

I understand that for the parametric equations $$\begin{align*}x&=f(t)\\ y&=g(t)\end{align*}$$ if $F(x)$ is the function with the parameter removed, then $\displaystyle F'(x) = \frac{\text{d}y}{\text{d}t}\big/\frac{\text{d}x}{\text{d}t}$. But the procedure for taking the second derivative is just described as "replace $y$ with $\text{d}y/\text{d}x$" to get $$\frac{\text{d}^2y}{\text{d}x^2}=\frac{\text{d}}{\text{d}x}\left(\frac{\text{d}y}{\text{d}x}\right)=\frac{\left[\frac{\text{d}}{\text{d}t}\left(\frac{\text{d}y}{\text{d}x}\right)\right]}{\left(\frac{\text{d}x}{\text{d}t}\right)}$$ I don't understand the justification for this step. Not at all. But that's all my book says on the matter; then it launches into plugging things into this formula, and it seems to work well enough, but I don't know why. I often find answers to questions on differentials are beyond my level. I'd really like to get this; it'd mean a lot to me if someone could break it down. -

@nana: For semantic reasons it would be better if you used `\mathrm` instead of `\text` for setting the d's upright. – t.b. Jul 6 '11 at 2:35

@Theo: Thanks. duly noted...:) – Nana Jul 6 '11 at 2:45

@Nana: By the way, you could save yourself some typing work if you wrote `$\newcommand{\d}{\mathrm{d}}$` at the beginning of a post. Then writing `$\frac{\d}{\d t}$` gives the upright form. This would already pay if you had only three differential quotients – t.b. Jul 6 '11 at 2:51

@Theo: wow! didn't know that...Thanks again!! – Nana Jul 6 '11 at 2:58

## 2 Answers

Consider $$\begin{align*} \frac{\text{d}^2y}{\text{d}x^2}&=\frac{\text{d}}{\text{d}x}\left(\frac{\text{d}y}{\text{d}x}\right)\\ &=\frac{\text{d}}{\text{d}t}\left(\frac{\text{d}y}{\text{d}x}\right)\cdot\frac{\text{d}t}{\text{d}x}\\ \end{align*}$$ where the last equality is a result of applying the chain rule. -

Their justification is that you can use the same process for $\frac{dy}{dx}$ as for $Y$, since you can now consider $Y_2 = g_2(t) = \frac{dy}{dx}(t)$; that is, you once again have a parametric equation in terms of the parameter $t$, and the parametric equation for $x$ stays the same. -
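Here is a small SymPy sketch of the formula in action (my own addition, with the unit circle as an arbitrary example curve): differentiate $dy/dx$ with respect to $t$ and divide by $dx/dt$, then cross-check against the explicit function $y=\sqrt{1-x^2}$ on the upper half of the circle.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.cos(t)          # example parametric curve: the unit circle
y = sp.sin(t)

dydx = sp.diff(y, t) / sp.diff(x, t)                      # first derivative dy/dx
d2ydx2 = sp.simplify(sp.diff(dydx, t) / sp.diff(x, t))    # d/dt(dy/dx) divided by dx/dt
print(dydx, d2ydx2)                                        # expect -cos(t)/sin(t) and -1/sin(t)**3

# Cross-check against y = sqrt(1 - X^2) on the upper half of the circle (where sin(t) > 0)
X = sp.symbols('X')
print(sp.simplify(sp.diff(sp.sqrt(1 - X**2), X, 2)))       # expect -1/(1 - X^2)**(3/2)
```

With $x=\cos t$ and $\sin t=\sqrt{1-x^2}$ on the upper half, $-1/\sin^3 t$ agrees with $-1/(1-x^2)^{3/2}$, which is the point of the "replace $y$ with $dy/dx$" recipe.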
http://mathoverflow.net/questions/31504/skeleton-of-a-braided-monoidal-category
## Skeleton of a braided monoidal category

Does every (lax) braided monoidal category have a braided monoidal skeleton? That is, I want to define a (lax) braided monoidal structure on a skeleton so that it is braided monoidal equivalent to the original category. If this is always possible, how does one accomplish it?

An extreme case of this question (which I am most interested in) is this: suppose the original braided monoidal category was strict, in the sense that the associators were trivial. I would like to replace this category with its skeleton, perhaps at the expense of making it nonstrict (generating nontrivial associators). -

It is similar to the question mathoverflow.net/questions/11674/… Are you sure that Positselski's answer does not answer your question negatively as well? – Bugs Bunny Jul 12 2010 at 10:26 I do not think so. I am not looking for a strictification of a braided monoidal category. On the contrary, I want to "destrictify": I want to make the set of objects "smaller", by identifying all isomorphic objects, at the expense of introducing a nontrivial associator. – Anton Kapustin Jul 12 2010 at 15:39

## 3 Answers

Yes. This follows from two simple facts:

1. If $F:C \simeq D$ is an equivalence of categories, and $D$ has a braided monoidal structure, then there exists a braided monoidal structure on $C$ and an enhancement of $F$ to a braided monoidal functor such that $F$ is an equivalence of braided monoidal categories.
2. Every category is equivalent to a skeletal category.

Number 2 is standard, so I won't dwell on it. If you don't believe number 1, try proving it. It is a great exercise. Here is a sketch of how to go about proving it.

1. Replace $F$ by an adjoint equivalence, i.e. pick an adjoint inverse equivalence $G$ to $F$. You'll need this to transfer the structure from D to C.
2. Now transfer the structure from D to C. There is not much to say here. Just follow your nose. For example the tensor product in C is defined by $$x \otimes_C y := G(F(x) \otimes_D F(y))$$ There is an associator because we have chosen F and G to be adjoint equivalences to each other. The braiding is similar. It is given by $$G(\beta_{F(x), F(y)}): G(F(x) \otimes_D F(y)) \to G(F(y) \otimes_D F(x)).$$ It is tedious to verify, but this does actually satisfy the hexagon identities.
3. Thus we've constructed a braided monoidal structure on C. To show that this new braided monoidal category C is equivalent as a braided monoidal category to D, we need to augment F and G to braided monoidal functors. To do this we just keep playing the same game. For example we need an equivalence $$F(x) \otimes_D F(y) \to F(x \otimes_C y) = FG(F(x) \otimes_D F(y)).$$ This is given by the inverse of the unit/counit of the adjoint equivalence between F and G. Constructing the other structure is no different.

This is morally the strategy advocated by Benjamin Enriquez. He tried to do this with less than an equivalence and ran into trouble. For the question as stated, you really only need the case that C and D are equivalent. Notice also that this still works when the monoidal structures are just lax. It still produces a strong monoidal equivalence between C and D. -

Just a note to say that there is a more general context in which this fact can be placed.
Just as an algebra structure for a monad can be transferred across any isomorphism, a pseudoalgebra structure for a 2-monad can be transferred across any (adjoint) equivalence. An "unbiased" structure of braided monoidal category is literally a pseudoalgebra structure for some 2-monad (the strict 2-monad whose strict algebras are braided strict-monoidal categories), so it transfers easily.... – Mike Shulman Jul 10 2011 at 7:05 ... A "biased" structure of braided monoidal category is not literally a pseudoalgebra structure for some 2-monad, but it is a strict algebra structure for some strict 2-monad. And since that strict 2-monad is "flexible" (=cofibrant), any pseudoalgebra structure for it is equivalent to a strict one. Thus, we can transfer such a structure in two steps: first transfer it as a pseudoalgebra structure, then strictify it using flexibility. – Mike Shulman Jul 10 2011 at 7:06

I think the answer will be yes. Assume $D,C$ are categories and $i:D\to C, p:C\to D$ are functors, with $p \circ i=id_D$, and that $C$ is braided monoidal. For $A,B \in Ob(D)$, set $A \otimes B := p(iA \otimes iB)$. Define the braiding $\beta_{A,B}:= p(\beta_{iA,iB})$, $a_{A,B,C}:= p(a_{iA,iB,iC})$. Then I think that $D$ satisfies the braided monoidal axioms. -

I think $\beta_{A,B}$ in general does not satisfy the hexagon axiom. Here is an example. Consider the category of Z-graded complex vector spaces, with the strict monoidal structure given by the convolution: $V_x\otimes V_y=V_{x+y}$. Let us define a nontrivial braiding $\beta(V_x,V_y)=\exp(a xy)$ for some complex number $a$. Now let $k$ be an integer and let's modify the category by introducing morphisms between vector spaces in degree $x$ and vector spaces in degree $x+k$, so that nonisomorphic objects live in the range $0\leq x\leq k-1$. For obvious choices of $p$ and $i$, the hexagon fails. – Anton Kapustin Jul 12 2010 at 15:50 In the framework of my answer, I think the hexagon holds since (in the strict case) $(\tilde\beta_{A,C}\otimes id_B)(id_A\otimes\tilde\beta_{B,C})= p(i(\tilde\beta_{A,C})\otimes id_{iB})p(...)= p(\beta_{iA,iC}\otimes id_{iB})p(id_{iA}\otimes\beta_{iB,iC})$ $=p((\beta_{iA,iC}\otimes id_{iB})(id_{iA}\otimes\beta_{iB,iC}))=p(\beta_{iA\otimes iB,iC})=p(\beta_{ip(iA\otimes iB),iC})=p(\beta_{i(A\otimes B),iC})= \tilde\beta_{A\otimes B,C}$. The problem might be in the axioms of a skeleton. Actually the framework is the following. $C$ is a (small) category and $D$ is a full subcategory such that $Ob(D)\to Ob(C)\to Ob(C)/iso$ is a bijection (this induces a map $\pi:Ob C\to Ob D$). In that case we don't have $p,i$ as above. What one can do is introduce $\tilde C$ whose objects are pairs $(X,f)$, where $X\in Ob(C)$ and $f\in Iso_C(X,\pi X)$ and $Hom_{\tilde C}((X,f),(Y,g))= Hom_{C}(X,Y)$. We then have functors $i:D\to \tilde C$ (taking $X$ to $(X,id_X)$) and $p:\tilde C\to D$ (taking $X$ to $\pi X$ and $\phi\in Hom_{\tilde C}((X,f),(Y,g))= Hom_{C}(X,Y)$ to $g\phi f^{-1}$). Then $p\circ i=id_D$, and we also have a forgetful functor $\tilde C\to C$. But I don't see how to lift the braided monoidal structure of $C$ to $\tilde C$, which makes it difficult to apply my proposal. -
http://physics.stackexchange.com/questions/56482/using-frac1ai-epsilon-pv-frac1a-i-pi-deltaa-in-feynman-integrals
# Using $\frac{1}{A+i\epsilon} = PV\frac{1}{A}-i\pi\delta(A)$ in Feynman Integrals

Are the following operations O.K.? This is related to the Feynman parameter trick. $$F:= \int_0^1 \mathrm{d}x\int_0^{1-x}\mathrm{d}y \frac{1}{f(x,y)+\mathrm{i}\epsilon}.$$ Now using $$\frac{1}{A+i\epsilon} = PV\frac{1}{A}-i\pi\delta(A),$$ where $PV$ denotes the Cauchy Principal Value, we get (taking only the imaginary part): $$\Im{F} = -\pi \int_0^1 \mathrm{d}x\int_0^{1-x}\mathrm{d}y\, \delta(f(x,y)) .$$ The trouble I have is that the zeros of $f(x,y)$, which I call $y^{\pm}$, seem to be outside the integration range, and hence the delta should yield zero. BUT here's what's funny: when I ignore all this and just perform the formal calculations (assuming I do it correctly), namely replacing $\delta(f(x,y))$ with $$\frac{1}{\bigl\vert \partial f/\partial y\bigr\vert_{y=y^{\pm}}}\times(\delta(y-y^-)+\delta(y-y^+)),\ \ \ (1)$$ (where $|\partial f/\partial y|_\pm$ are equal) and assuming that $y^{\pm}\in[0,1-x]$ (which seems to be false), the two deltas just give $1+1 = 2$. Then the result seems to be correct, or at least it agrees with what I calculated for the same quantity using a totally different method. Could this all just be a coincidence? I mean, shouldn't the deltas produce zero if $y^{\pm}\notin[0,1-x]$, or am I using the wrong formula $(1)$? -

2 Actually that's a much better title than we usually get. More informative and specific (to a point) is generally better when it comes to titles. – David Zaslavsky♦ Mar 10 at 23:29 1 Eq. (1) only works if $|\partial f/\partial y|$ is the same for both $y^\pm$, otherwise you can't pull it out of the brackets. Don't know if this helps your problem. – Michael Brown Mar 11 at 0:50 1 – Steve B Mar 11 at 0:56 Sorry, I should add that it is the same for both. – The Noob Mar 11 at 10:30 @The Noob: If you would like the community to help solve your apparent paradox, you would have to give the explicit form of $f(x,y)$. Right now, it is hard to conclude anything other than what has already been said by you in the question formulation (v4). – Qmechanic♦ Mar 13 at 17:28
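The identity in the title is easy to probe numerically. The sketch below is only an illustration with a made-up one-dimensional $g(x)$ (not the $f(x,y)$ of the question): when $g$ has a simple zero inside the integration range, the imaginary part of $\int \mathrm{d}x/(g(x)+i\epsilon)$ tends to $-\pi/|g'|$ evaluated at that zero as $\epsilon\to 0$, and it tends to zero when the zero lies outside the range.

```python
# Illustration (not the OP's f): Im ∫_0^1 dx / (g(x) + i*eps) approaches
# -pi * sum over zeros x0 of g inside [0,1] of 1/|g'(x0)| as eps -> 0.
import numpy as np
from scipy.integrate import quad

a = 0.37                        # g(x) = x^2 - a^2 has one zero at x = a in [0, 1]
g = lambda x: x**2 - a**2       # g'(a) = 2a

def im_integrand(x, eps):
    return np.imag(1.0 / (g(x) + 1j * eps))

for eps in (1e-2, 1e-3, 1e-4):
    val, _ = quad(im_integrand, 0.0, 1.0, args=(eps,), points=[a], limit=400)
    print(eps, val)

print("-pi/|g'(a)| =", -np.pi / (2 * a))   # the eps -> 0 limit
```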
http://quant.stackexchange.com/questions/1847/how-can-one-compute-the-greeks-on-vix-futures
# How can one compute the Greeks on VIX Futures

I am guessing the short answer to this question is "use the chain rule and linearity of the derivative," but I am looking for more specific advice on how to compute the derivatives of a VIX futures contract. Roughly speaking, the VIX index represents a rolling weighted position in a bunch of S&P call & put options at different strikes. Generally speaking, one cannot buy and sell spot VIX, but there are VIX futures contracts. I would like to be able to compute the sensitivity of the price of the futures contract to: the price of the underlying (in this case the S&P index), volatility of the underlying, change in volatility of the underlying, the risk-free rate, and time. -

## 3 Answers

Short Answer: Futures don't have Greeks.

Long Answer: Assuming a non-strictly-mathematical (i.e. false) point of view. Well, having Greeks on VIX Futures is not relevant; the VIX value is itself a "Greek" (and Futures don't have Greeks). Sensitivity to

• Price of the Underlying: Insensitive (ν = 0)
• Volatility of the Underlying: Delta Δ = 1 (to the volatility of the S&P option combination used to compute VIX)
• Change in Volatility of the Underlying: Gamma Γ = 1
• Risk-Free Rate: Same as for any other kind of future
• Time: 0 (Θ = 0)

Volatility of VIX should be the Vega of the S&P option combination used to compute VIX. Vega of VIX should be the Vomma of the S&P option combination used to compute VIX. You can compute the Greeks on VIX options, which would be more relevant, but don't expect relationships with the S&P volatility/prices; VIX is much more complex than simple plain volatility of the S&P. -

1 The "underlying" in @shabbychef's question is the S&P 500, not spot VIX. Perhaps, then, it would have been more appropriate to call them "Greek-like" exposures, but I believe you did not answer the question. – Tal Fishman Sep 7 '11 at 15:20 Well there is no direct mathematical relationship between VIX and the plain S&P500 (the index). There is only a relationship between VIX and S&P500 options. – Lliane Sep 7 '11 at 15:57 1 "VIX" is just shorthand for a portfolio of S&P 500 options, so any relationship between S&P 500 and S&P 500 options carries over to VIX. The question is how does this carry over to futures on VIX. I actually think the answer to @shabbychef's question is just to take the weighted average of the Greeks on each option that forms the VIX. I'm not 100% sure this is right, so not posting as an answer. – Tal Fishman Sep 7 '11 at 16:11

VIX is calculated from a basket of SPX options, and VIX futures expire into the following expiration; e.g. September VIX futures that will expire next Wednesday will use the SPX October options chain to calculate the settlement value. If $B$ is the value of the basket then the VIX value at expiration is $\sqrt{ B }$. Then the VIX futures price is the expectation of the square root of the basket, $VIX _{F} = E[\sqrt{ B }]$. Delta of the VIX futures price with respect to the basket would be $$\ \frac{\partial VIX _{F}}{\partial B} = \frac{\partial E[\sqrt{ B }]}{\partial B}$$ As you can see, taking that expectation is not simple, since there is no simple connection between VIX futures greeks and SPX options greeks because of the expectation and square root. So the "use the chain rule and linearity of the derivative" approach would not get you anywhere. But that does not mean that such a derivative is 0. Such a derivative can be calculated in the Malliavin sense, but that is probably not what you're looking for. - hmm. I think I get it.
Although the expectation is of the future value of the basket, and the derivative is with respect to spot basket value, right? In that case, I think I can use the 'Delta method' (after genuflecting before some regularity conditions) and get a good approximation. (Well, I view Delta method as Taylor's theorem plus expectation magic.) – shabbychef Sep 20 '11 at 22:55 I would calculate implied greeks based on the expectations for the underlying S&P option basket. Unfortunately, these are just expectations as I have a database of 17 years of S&P options pricing that will tell you your expectations are likely wrong more often than not. If you had a pricing model that had these expectations built in, I think the resulting greeks would be as correct as your pricing model. -
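The point made in the second answer, that the futures price involves an expectation of a square root and so does not follow from a naive chain rule, can be illustrated with a toy Monte Carlo. The lognormal model for the basket value below is purely an assumption for illustration, not how VIX settlement actually works, and the parameter values are arbitrary.

```python
# Toy sketch: with an assumed lognormal basket value B (not the real VIX
# mechanics), E[sqrt(B)] differs from sqrt(E[B]) (Jensen), and the
# sensitivity dE[sqrt(B)]/dB0 can be estimated pathwise from the same draws.
import numpy as np

rng = np.random.default_rng(0)
B0, sigma, T = 0.04, 0.8, 30 / 365                 # arbitrary toy parameters
z = rng.standard_normal(1_000_000)
B = B0 * np.exp(sigma * np.sqrt(T) * z - 0.5 * sigma**2 * T)   # E[B] = B0

print("E[sqrt(B)] =", np.sqrt(B).mean(), " vs sqrt(E[B]) =", np.sqrt(B0))

# Pathwise derivative: dB/dB0 = B/B0, so d sqrt(B)/dB0 = sqrt(B)/(2*B0).
print("dE[sqrt(B)]/dB0 ~", (np.sqrt(B) / (2 * B0)).mean())
```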
http://math.stackexchange.com/questions/123131/motivation-behind-the-definition-of-flat-module?answertab=active
Motivation behind the definition of flat module

Can someone explain what is the motivation behind the definition of a flat module? I saw the definition but I don't really know why it is important to work with these structures. - Tensor products conserve almost every property you could ask for. Tensor products with flat modules really preserve almost anything you could want. And it turns out that lots of things are flat - for example, over PIDs, flat and torsion free are the same thing. – mixedmath♦ Mar 22 '12 at 3:08 1 Are you convinced that tensor products are useful? If you are, then you might want them to behave nicely. – Joel Cohen Mar 22 '12 at 9:07 @JoelCohen What do you mean by "useful" and behave nicely? – Jr. Mar 23 '12 at 0:27 2 Answers Flatness in commutative algebra satisfies a geometric condition: the fibers of a morphism between two varieties (schemes) don't vary too wildly. A flat morphism $f \colon X \to Y$ of varieties (schemes) can be thought of as a continuous family of varieties (schemes) $\{ f^{-1} (y) \}_{y \in Y}$. An important theorem says that if $f \colon X \to Y$ is a flat morphism between two irreducible varieties then its fibers have dimensions equal to $\dim X - \dim Y$. For example, fix an algebraically closed field $k$, and consider the morphism $f \colon \mathbb{A}^1 \to \mathbb{A}^1$ defined by $x \mapsto x^2$. It corresponds to the ring homomorphism $k[x^2] \hookrightarrow k[x]$. You should be able to prove that $k[x]$ is a flat $k[x^2]$-algebra (it is also free), hence the morphism $f$ is flat. The fibers are almost always made up of two points. Consider the morphism $g \colon \mathbb{A}^2 \to \mathbb{A}^2$ defined by $(x,y) \mapsto (x, xy)$ (it is an affine chart of the blowing up of the plane). It corresponds to the ring homomorphism $k[x,xy] \hookrightarrow k[x,y]$. The fiber of the point $(0,0)$ is the line $\{ x= 0 \}$ which has dimension $1$, while the fiber of the other points is empty or made up of a single point. You should be able to check that $k[x,y]$ is not flat as a $k[x,xy]$-algebra. - You could make this more precise: for example, you might mention that "a finitely-generated module over a noetherian integral domain is flat if and only if it is locally free of constant rank". But personally I think flatness is a concept from homological algebra and should be distinguished from local-freeness. – Zhen Lin Apr 22 '12 at 0:10 According to the answers to this MO question, flatness should really be thought of as an algebraic condition rather than a geometric one. Mumford [The red book of varieties and schemes, Ch. III, § 10] writes, The concept of flatness is a riddle that comes out of algebra, but which technically is the answer to many prayers. For the purposes of homological algebra, it is incredibly useful for a functor to preserve exact sequences, and flatness of a (right) $R$-module $M$ is precisely what is needed to make the functor $M \otimes_R -$ send exact sequences of (left) $R$-modules to exact sequences of abelian groups. For example, if $I$ is a (left) ideal of $R$, then we have an exact sequence of (left) $R$-modules: $$0 \longrightarrow I \longrightarrow R \longrightarrow R / I \longrightarrow 0$$ In general, the tensored sequence is only right exact: $$M \otimes_R I \longrightarrow M \longrightarrow M \otimes_R R / I \longrightarrow 0$$ It's not hard to see that the image of $M \otimes_R I \to M$ is the abelian group $M I$, so when $M$ is flat, the natural map $M \otimes_R I \to M I$ is an isomorphism.
Conversely, if this holds for every (left) ideal $I$, then $M$ is flat. Morally, what this is saying is that a flat $R$-module has no "generalised" $R$-torsion; if $R$ is a principal ideal domain, then an $R$-module is flat if and only if it has no $R$-torsion in the ordinary sense. (Think of $M \otimes_R I$ as being a group of formal linear combinations of formal products; one possible way $M \otimes_R I \to M I$ could fail to be an isomorphism is if $m \cdot a = 0$ for some non-zero $m$ in $M$ and non-zero $a$ in $I$.) Under certain circumstances, flatness coincides with other conditions which are more geometric in nature. For example, if $R$ is a commutative noetherian ring, then the following are equivalent for a finitely-generated $R$-module $M$: • $M$ is projective. • $M$ is flat. • $M$ is locally free, in the sense that there are $f_1, \ldots, f_n$ in $R$ such that $f_1 + \cdots + f_n = 1$ and each $M \otimes_R R[1 / f_i]$ is free. Actually, we always have the implication projective $\Rightarrow$ flat, even when $M$ is not finitely generated and $R$ is non-commutative/non-noetherian (just so long as the axiom of choice holds!), but the commutative noetherian case is what is usual in algebraic geometry. Being locally free means that $M$ is the module of global sections of a finite-dimensional vector bundle $\tilde{M}$ over $\operatorname{Spec} R$. It is reasonably clear that the rank of $\tilde{M}$ is locally constant on $\operatorname{Spec} R$. We have a partial converse: if $R$ is a noetherian integral domain and the rank of finitely-generated module $M$ is constant on $\operatorname{Spec} R$, then $M$ is locally free and hence flat. - Mumford's words are really nice! – Andrea Apr 22 '12 at 9:55
http://mathoverflow.net/questions/71885?sort=newest
## Units in cyclotomic fields

Let $q$ and $r$ be distinct prime numbers. I noticed (computing a few cases) that $\zeta_{2q} + \zeta_{2q}^{-1} + \zeta_{2r} + \zeta_{2r}^{-1}$ is a unit (in $\mathbb{Z}[\zeta_{2qr}]$, say). Is this always true? Why is that? -

Somewhat related: math.stackexchange.com/questions/3185/… – Chandrasekhar Aug 2 2011 at 14:41

## 2 Answers

I assume you want $q$ and $r$ to be odd primes. Also, note that I will be using the notation that $\zeta_m$ means an arbitrary primitive $m$-th root of unity (but the same one every time it appears in an equation), and will be proving the statement in that generality.

Lemma: For any odd $m>1$ and any $\zeta_m$, the number $\zeta_m+1$ is a unit.

Proof: Let $r$ be such that $m | 2^r-1$. We'll abbreviate $\zeta_m$ to $\zeta$. Then $\zeta^{2^r} = \zeta$, so $$1 = \left( \frac{\zeta^{2}-1}{\zeta -1} \right) \left( \frac{\zeta^{4}-1}{\zeta^{2} -1} \right) \cdots \left( \frac{\zeta^{2^r}-1}{\zeta^{2^{r-1}} -1} \right)= \left( \zeta+1 \right) \left( \zeta^{2} + 1 \right) \cdots \left( \zeta^{2^{r-1}} +1 \right),$$ exhibiting an explicit inverse for $\zeta+1$.

Let $\eta$ be a primitive $2qr$-th root of unity. Then your proposed unit is $\eta^{r}+\eta^{-r} + \eta^q + \eta^{-q}$ and factors as $$\eta^r (1+\eta^{q-r})(1+\eta^{-q-r}).$$ Since $q$ and $r$ are odd and relatively prime, $\eta^{q-r}$ and $\eta^{q+r}$ are primitive $qr$-th roots of unity and we are done by the lemma. -

3 As a rule of thumb, the only easy way to get units in cyclotomic fields is to play with ratios of the form $(\eta^a-1)/(\eta-1)$ where $a$ is relatively prime to the order of $\eta$, so it would be very surprising if an argument like this didn't work. – David Speyer Aug 2 2011 at 15:31 Thanks! I guess I should have tried to play with the expression a little more... – expmat Aug 2 2011 at 16:09 2 @David: how precise can that rule of thumb be made? Is it known for what polynomials $P$ it's true that $P(\zeta_n)$ is always a unit for $n$ not contained in a union of arithmetic progressions, or something like that? – Qiaochu Yuan Aug 3 2011 at 1:38 @Qiaochu: Great question! I don't know. – David Speyer Aug 3 2011 at 2:42

The answer is also yes if one of the primes, say $r$, is $2$, because then $\zeta_{2r}+\zeta_{2r}^{-1}=0$ and $\zeta_{2q}+\zeta_{2q}^{-1}=\zeta_{2q}(1+\zeta_{2q}^{-2})$ is a unit (as $\zeta_{2q}^{-2}$ is a primitive $q$th root of $1$ and $q$ is an odd prime).

Edit: (1) Note that if both primes are odd then $\zeta_q+\zeta_q^{-1}+\zeta_r+\zeta_r^{-1}$ is also a unit. Indeed, $-\zeta_q$ is a primitive $2q$th root of $1$ (this relies on $q$ being odd), so let's call it $\zeta_{2q}$, and likewise for $r$. Then $-(\zeta_q+\zeta_q^{-1}+\zeta_r+\zeta_r^{-1})=\zeta_{2q}+\zeta_{2q}^{-1}+\zeta_{2r}+\zeta_{2r}^{-1}$, and we know that the latter is a unit. (2) Note also that if one of the primes, say $r$, is $2$ then $\zeta_q+\zeta_q^{-1}+\zeta_r+\zeta_r^{-1}$ is not a unit: it is equal to $\zeta_q+\zeta_q^{-1}-2$, so it is a unit times the square of $\zeta_q-1$, but the latter (unlike $\zeta_q +1$) is not a unit because it goes to $0$ under the unique ring homomorphism $\mathbb Z [\zeta_q]\to \mathbb Z/q$, which takes $\zeta_q$ to $1$.
- Sorry, but I don't understand your second paragraph... – darij grinberg Aug 2 2011 at 17:10 @darij: I have re-edited for clarity and details. – Tom Goodwillie Aug 2 2011 at 18:30 I still don't get it. Your first paragraph proves it for $r=2$, not for $r$ being even... but I assume that I am misunderstanding you on a more general scale. – darij grinberg Aug 3 2011 at 11:16 In the question $q$ and $r$ are primes. David Speyer answered it when neither of them is $2$. I answer it when one of them is $2$. I also point out that a related statement is true when neither of them is $2$ but false when one of them is $2$. – Tom Goodwillie Aug 3 2011 at 12:33 Ah! I thought your post was supposed to be independent of David's. Everything is clear now. – darij grinberg Aug 3 2011 at 14:17
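A quick numerical cross-check of the claim, for one pair of primes: an algebraic integer in $\mathbb{Q}(\zeta_{2qr})$ is a unit exactly when its norm down to $\mathbb{Q}$ is $\pm 1$, and that norm can be approximated as the product over all Galois conjugates. The choice $q=3$, $r=5$ below is arbitrary, and this is only a floating-point sanity check, not a proof.

```python
# Sanity check for q = 3, r = 5: the product of all Galois conjugates of
# zeta_{2q} + zeta_{2q}^{-1} + zeta_{2r} + zeta_{2r}^{-1} in Q(zeta_{2qr})
# should be +-1 (up to rounding), which for an algebraic integer means "unit".
import cmath
from math import gcd

q, r = 3, 5
m = 2 * q * r
eta = cmath.exp(2j * cmath.pi / m)      # a primitive 2qr-th root of unity

def conjugate(k):                        # image of the element under eta -> eta^k
    return eta**(r * k) + eta**(-r * k) + eta**(q * k) + eta**(-q * k)

norm = 1
for k in range(1, m):
    if gcd(k, m) == 1:
        norm *= conjugate(k)

print(norm)                              # approximately +1 or -1
```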
http://stats.stackexchange.com/questions/35731/what-is-the-estimation-bias-of-the-top-estimate-in-a-list-sorted-by-value
# What is the estimation bias of the top estimate in a list sorted by value?

Let's make the problem as simple as possible. Assume two related random variables, $X_1$ and $X_2$. On the basis of some data we estimate their true means $\mu_{X_1}$ and $\mu_{X_2}$ by sample means $\hat\mu_{X_1}$ and $\hat\mu_{X_2}$. These estimates are unbiased. But now let's sort our two random variables by their sample means and look at the variable with the highest sample mean. For this top-of-the-list random variable the sample mean is now a biased estimator of its true mean (under some reasonable assumptions, e.g. that the means of these random variables are themselves distributed in a certain way and that distribution has a mean) -- that's easy to verify by Monte Carlo. To make the effect obvious, take not two but a thousand random variables and make their true means similar. The question is: what is this bias and how do I calculate it analytically? I'd also appreciate some conceptual discussion of how estimation bias arises out of sorting by estimated values. -

3 I don't follow the procedure. In the simple case, when you "sort ... by their sample means" you merely identify which of $X_1$ and $X_2$ has the higher sample mean. This does not change either $\mu_{X_1}$ or $\mu_{X_2}$, whence they must remain unbiased. It sounds like you're doing something additional which is not clearly described in the question. – whuber♦ Sep 5 '12 at 18:12 1 Try a Monte-Carlo :-) Take a thousand random variables with their means ~N(0,1), generate sample data (say, 100 data points per variable), calculate sample means, sort the variables by sample means, take the top variable and see whether its sample mean is an unbiased estimator of its true mean. Intuitively, realizations with large positive estimation errors will gravitate towards the top of the sorted list. – Baloo Sep 5 '12 at 18:22 A paper you may be interested in is P. Hall and H. Miller, Modeling the variability of rankings, Ann. Statist., vol. 38, no. 5 (2010), 2652-2677. There the question is a bit different; instead of asking about bias, they are interested in the number of entities which are correctly sorted based on noisy observations. The model is quite similar to the one you propose, except that they allow dependence in the noise between the entities observed. At any rate, the focus is different, but may be a useful connection to the literature. – cardinal Sep 5 '12 at 18:43 The language in your question is a bit fast and loose, which I suspect is what led @whuber to interpret your question differently than you intended. – cardinal Sep 5 '12 at 18:45 @cardinal Thank you, connections to the literature are very useful. I am sure I am not the first one to concern myself with this issue. – Baloo Sep 5 '12 at 18:59

## 1 Answer

Let $X_{in}$ be i.i.d. copies of $X_i$ and let $Y_i$ denote the $i^{th}$ sample mean $$Y_i = \frac{1}{N_i} \sum_{n=1}^{N_i} X_{in} \enspace,$$ and let $F_i$ denote its cumulative distribution function (CDF). Note that $Y_i$ and $F_i$ are fully defined when we have the distribution of the random variable $X_i$ and the number of samples for this variable.
Then, the probability that the highest sample mean is smaller than some value $x$ is equal to the probability that all sample means are smaller than $x$, and therefore its distribution is defined by: $$F_{\max}(x) = P( \max_i Y_i < x ) = \prod_{i=1}^M P( Y_i < x ) = \prod_{i=1}^M F_i(x) \enspace,$$ where $N_i$ is the number of samples for the $i^{th}$ random variable, and $M$ is the number of random variables. Some work has been done to determine the bias of the maximum sample average relative to the actual maximum mean: $$E\left\{ \max_i Y_i \right\} - \max_i E \{ X_i \} \enspace.\tag{1}$$ See, for instance, this paper for some bounds. So why does this bias occur? The intuitive reason is that when you select the highest sample mean, you are more likely to select an overestimated mean than an underestimated one. More formally, it is a direct consequence of Jensen's inequality, which states $E f(X) > f( E X )$ for any strictly convex $f$ (note that $\max$ is a convex operator). You ask about a different, but related, bias: the difference between the maximum sample mean and the true mean of the corresponding random variable that has yielded the maximum sample average. This corresponds to: $$\sum_{i=1}^M P\left( Y_i = \max_j Y_j \right) E \left\{ Y_i - E \left\{ X_i \right\} \middle| Y_i = \max_j Y_j \right\} \enspace,$$ which can be rewritten as $$E \left\{ \max_i Y_i \right\} - \sum_{i=1}^M P\left( Y_i = \max_j Y_j \right) E \left\{ X_i \right\} \enspace.$$ I don't know of previous work on this particular bias, but it is easy to show that this bias is lower bounded by the bias in $(1)$ (since a weighted sum is always smaller than the maximum). In general, the resulting bias may not have a very 'pleasant' analytical form, but it is easy enough to approximate numerically. - Hi MLS, This is a good start. A couple of notes: (a) Though I see this rather often for these kinds of arguments, there is no need to appeal to (a generalized form of) Jensen's inequality in the first part. Note that monotonicity of expectation is already enough. (b) I don't believe you can drop the conditioning in the second term of your last equation. For example, if the latent $X_i$ are iid, your second term just reduces to the unconditional mean, which can't be correct. :-) – cardinal Sep 6 '12 at 14:29 Also, just a minor typo, but the first equation should use a double index, i.e., $X_{in}$. – cardinal Sep 6 '12 at 14:45 @cardinal Thanks for the comments! (a) Is monotonicity really enough? I may be missing the point, but $\log$ is strictly monotonic, but $\log \max_i x_i = \max_i \log x_i$, for any $x_i$. To get an inequality, I thought we would need to use the convexity of $\max$ and Jensen's inequality. What are your thoughts on this? (b) I changed the equation before the last, since I believe this is what the question is asking about. The last equation can then stay as it was, I think. I indeed see no problems with it reducing to the unconditional mean when all $X_i$ are i.i.d. – MLS Sep 6 '12 at 14:56 Hi, MLS. For (a), note that $Y_i \leq \max_j Y_j$ for each $i$, so clearly from monotonicity of expectation $\max_i \mathbb E Y_i \leq \mathbb E \max_j Y_j$; this works if we replace $\max$ with $\sup$ of an infinite set, as well, without modification. For (b), I agree that your last equation now (after your edit) follows from the penultimate one, but I don't think that's the quantity of interest.
(Relating your notation back to the question of the OP, $F_i$ is a random distribution function, not a fixed one, so I think $\mathbb E X_i$ is really a random variable, in spite of the notation.) – cardinal Sep 6 '12 at 16:50 1 If you are doing this for some concrete data set, you should think about bootstrapping the bias. – kjetil b halvorsen Dec 5 '12 at 18:44 show 2 more comments
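The numerical approximation mentioned in the answer, and the Monte Carlo suggested in the question's comments, can be sketched in a few lines. The specific parameters below (1000 variables, 100 observations each, standard normal means and noise) are arbitrary choices for illustration.

```python
# Monte Carlo sketch of the selection bias: draw M true means from N(0,1),
# estimate each from n observations, pick the largest sample mean, and
# compare it with the true mean of that same variable. Parameters arbitrary.
import numpy as np

rng = np.random.default_rng(1)
M, n, reps = 1000, 100, 200
gaps = []
for _ in range(reps):
    mu = rng.standard_normal(M)                           # true means
    xbar = mu + rng.standard_normal((n, M)).mean(axis=0)  # sample means
    top = np.argmax(xbar)
    gaps.append(xbar[top] - mu[top])

print("average bias of the top estimate:", np.mean(gaps))  # clearly positive
```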
http://mathoverflow.net/revisions/6162/list
## Return to Answer

3 TeXified

It is easier to take the derivative, and consider the volume of the $(n-1)$-sphere (i.e., the "surface area" of the boundary of the ball). Start with the integral $\int_{\mathbb{R}^n} e^{-x_1^2 - \dots - x_n^2} dx_1 \dots dx_n$. Fubini's theorem lets you decompose this into a product of 1-dimensional integrals, and you get $\pi^{n/2}$. Since the integrand is spherically symmetric, you can change to an integral $\int_0^\infty vol(S^{n-1}(r)) e^{-r^2} dr$, where $S^{n-1}(r)$ is the $(n-1)$-sphere of radius $r$. The volume of this sphere is $r^{n-1}$ times the volume of the unit sphere, so solving for that, you get $\frac{\pi^{n/2}}{\int_0^\infty r^{n-1} e^{-r^2} dr}$. A change of coordinates ($u = r^2$) in the denominator yields the integral defining $\Gamma(n/2)$.
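The construction in this answer is easy to check numerically: evaluating the radial integral and solving for the sphere volume reproduces the standard surface-area formula $2\pi^{n/2}/\Gamma(n/2)$ for the unit $(n-1)$-sphere; the factor $2$ appears once the substitution $u=r^2$ is carried out explicitly.

```python
# Numerical check: pi^(n/2) = S_{n-1} * ∫_0^∞ r^(n-1) e^{-r^2} dr, and the
# resulting S_{n-1} matches 2*pi^(n/2)/Gamma(n/2) (the radial integral equals
# Gamma(n/2)/2 after the substitution u = r^2).
import math
from scipy.integrate import quad

for n in (2, 3, 4, 5):
    radial, _ = quad(lambda r: r**(n - 1) * math.exp(-r**2), 0, math.inf)
    S = math.pi**(n / 2) / radial
    print(n, S, 2 * math.pi**(n / 2) / math.gamma(n / 2))
```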
http://math.stackexchange.com/questions/52775/help-with-a-derivative
# Help with a derivative

I need to take the derivative of the following function w.r.t. $x$. (This is the generalized beta of the second kind (GB2) density function.) $$\frac{a}{bB(p,q)}\frac{(x/b)^{ap-1}}{(1+(x/b)^a)^{p+q}}$$ $B(p,q)$ is the beta function. I don't need the answer per se, but would appreciate some advice on what strategy I should use, especially as regards the derivative of the beta function. Thanks -

1 Please clarify: Are the variables $a, b, p, q$ independent of $x$? If so, then this question is not as intimidating as it may first seem. For example, I don't even need to know what $B(p, q)$ is to know that it is constant with respect to $x$. – Shaun Ault Jul 21 '11 at 0:09 Just do it the normal way; or take the log of your expression and calculate its derivative. – Apprentice Queue Jul 21 '11 at 0:34 @Ben: You can disregard the first fraction as a constant multiple and the second fraction you can use the power rule after doing some simplifying. – night owl Jul 21 '11 at 0:47 The ugly (or attractive) thing in front is just a constant, to make the "total probability" equal to $1$. There is less to this problem than meets the eye! – André Nicolas Jul 21 '11 at 14:53 @ben: I posted an answer assuming that $p,q$ are independent of $x$. Is that so? If you confirm I will undelete my answer. – Américo Tavares Jul 21 '11 at 16:53

## 1 Answer

Assuming that everything is real-valued and that a, b, p, and q are all independent of x, then we consider the following problem. $$\left( K\frac{x^{ap-1} }{(1 + (x/b)^a)^{p + q}} \right)'$$ where $K = \dfrac{a}{bB(p,q) \cdot b^{ap-1}}$, i.e. constant with respect to $x$. This is not such a bad problem, as it's just a composition of various functions that we learn how to differentiate in an intro calc class. But it's not very fun looking, and it's a bit messy. But keep track of the factors and plug along. Now it's an application of either the quotient rule or the product rule - one's choice. I choose the product rule today. So we note the following: $(x^{ap-1})' = (ap-1)x^{ap-2}$ and $(\;(1 + \frac{x^a}{b^a})^{-p-q}\;)'$ which, remembering the chain rule, becomes $(-p-q) (1 + \frac{x^a}{b^a})^{-p-q-1} \cdot \frac{ax^{a-1} }{b^a}$. Putting these together, one gets $$K \left( \frac{(ap-1)x^{ap-2}}{(1 + \frac{x^a}{b^a})^{p+q}} + \frac{x^{ap-1} \cdot -(p+q) (\frac{ax^{a-1}}{b^a})}{(1 + \frac{x^a}{b^a})^{p+q+1}} \right)$$ And I have done my best to put factors in the order in which they appear from using the chain, product, and power rules. -

Yes x is independent of the parameters a,b,p,q and everything is real valued. Thanks everybody! Unfortunately I just realized I also need to take the derivative with respect to each of the parameters! – ben Jul 22 '11 at 6:14 @ben: you need to find 5 different derivatives? A total derivative? What's the overall goal? – mixedmath♦ Jul 22 '11 at 7:19 The overall goal is to fit this model to a dataset via minimizing the sum of squared errors. Hence I need to know the derivative of the model with respect to the four parameters a,b,p,q (so that I can subsequently do a Newton-Raphson method to solve for the minimizing parameter values) – ben Aug 15 '11 at 0:25
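If one just wants to confirm the hand computation (or to obtain the parameter derivatives mentioned in the comments), a computer algebra system will happily do it. The sketch below uses sympy and compares its derivative of the density with the expression derived in the answer at one arbitrary numeric point.

```python
# Sketch: compare sympy's derivative of the GB2 density with the hand-derived
# expression from the answer, at an arbitrary numeric point.
import sympy as sp

x, a, b, p, q = sp.symbols('x a b p q', positive=True)
B = sp.beta(p, q)                       # constant with respect to x

dens = (a / (b * B)) * (x / b)**(a*p - 1) / (1 + (x / b)**a)**(p + q)

K = a / (b * B * b**(a*p - 1))
hand = K * ((a*p - 1) * x**(a*p - 2) / (1 + x**a / b**a)**(p + q)
            - (p + q) * x**(a*p - 1) * a * x**(a - 1)
              / (b**a * (1 + x**a / b**a)**(p + q + 1)))

vals = {a: 2, b: 3, p: sp.Rational(3, 2), q: sp.Rational(5, 2), x: sp.Rational(7, 4)}
print(sp.N(sp.diff(dens, x).subs(vals)))   # the two printed numbers agree
print(sp.N(hand.subs(vals)))

# sp.diff(dens, a) etc. give the parameter derivatives asked about in the comments.
```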
http://physics.stackexchange.com/questions/36079/unit-of-torque-with-radians/36090
# Unit of torque with radians?

Usually, the angular frequency $\omega$ is given in $\mathrm{1/s}$. I find it more consistent to give it in $\mathrm{rad/s}$. The angular momentum $L$ is then given in $\mathrm{rad \cdot kg \cdot m^2 / s}$. However, the relation for torque $\tau$ says: $$\tau \cdot t = L$$ So the torque should not be measured in $\mathrm{N \cdot m}$ but $\mathrm{rad \cdot N \cdot m}$. Would that then be completely consistent? -

– Qmechanic♦ Sep 10 '12 at 19:34

## 2 Answers

OP wrote (v1): So the torque should not be measured in N⋅m but rad⋅N⋅m. Would that then be completely consistent? No, that would not be consistent with the elementary definition of torque $\vec{\tau}=\vec{r} \times \vec{F}$ as a cross-product between a position vector $\vec{r}$ and a force vector $\vec{F}$. An angle in radians is the ratio between the length of a circle arc and its radius, and is therefore dimensionless. For instance, the angular version $\tau = I \alpha$ of Newton's 2nd law is only true (without an extra conversion factor) if the angle behind the angular acceleration $\alpha$ is measured in radians. However, it should be mentioned that due to the formula $$W~=~\int \tau ~d\theta,$$ for angular work, torque can be viewed as energy per angle, i.e., the SI unit of torque is also joules per radian. See also this Wikipedia page and this Phys.SE question. -

1 Where can I get another $\mathrm{rad}$ from? Or is that the reason one does not use $\mathrm{rad}$ in those contexts? – queueoverflow Sep 10 '12 at 15:39 You would have to put a conversion coefficient with a value of one radian in front of $\vec{r}\times\vec{F}$. It would be possible to reformulate all the equations of physics in this way to explicitly include radians, but it would make things messier than they are. – David Zaslavsky♦ Sep 10 '12 at 18:08 @queueoverflow: $\mathrm{rad}$ is not a unit like meter or second. It is basically $1$. You can multiply anything with $1$ or $\mathrm{rad}$ without changing its meaning. My advice is never to use $\mathrm{rad}$. It is more confusing than helpful. – C.R. Sep 11 '12 at 1:20 @DavidZaslavsky: Okay, that makes complete sense. – queueoverflow Oct 18 '12 at 15:17

Anthony French of MIT, in a private communication to me years ago, finally got me to understand when to write radians as a unit and when to omit it. Here is the answer. If the quantity in question has a numerical value that depends on whether the angular unit is expressed in degrees, radians, revolutions, or something similar, then explicitly include the appropriate unit. If the quantity's numerical value does NOT depend on the angular unit, then omit the angular unit. As an example, consider angular velocity and linear velocity. Angular velocity's numerical value depends on whether one uses degrees or radians: $50^\circ/\mathrm{s}$ isn't the same as $50\;\mathrm{rad/s}$. Linear velocity, though, has a numerical value that is independent of any angular unit, so when we calculate $v = \omega r$ we never write $\frac{\mathrm{rad} \cdot \mathrm{m}}{\mathrm{s}}$ as the unit. We simply write $\mathrm{m/s}$. -

I'd say the last example only works because there is an implicit 1/rad on the right side that converts radius to circumference. – queueoverflow Sep 10 '12 at 17:22 There is only one rad on the right hand side, and it appears explicitly in the unit of $\omega$. The resulting product, linear velocity, has a value that can be measured with only a calibrated stick and a clock, with no regard for angular units. – user11266 Sep 10 '12 at 19:40
http://nrich.maths.org/1832/solution
# Tiny Nines

##### Stage: 4 Challenge Level:

Good solutions to this were sent in by Chen of The Chinese High School, Singapore, Anders and Sammy of Bentley Park College, Rebecca of Henry Box School and Andrei of School 205, Bucharest, Romania. Rolf sent in a very nice generalisation. Well done all of you!

"I observed the pattern, then I proved it. Writing the decimal expansions we get recurring decimals and I use the brackets to denote that the set of digits inside the brackets is repeated over and over indefinitely. $1/9 = 0.11111\dots = 0.(1)$ $1/99 = 0.010101\dots = 0.(01)$ $1/999 = 0.001001001\dots = 0.(001)$ $1/9999 = 0.000100010001\dots = 0.(0001)$ and so on. The general pattern is: $$\frac{1}{(\underbrace{9\dots9}_{n})} = 0.(\underbrace{00\dots0}_{n-1}1)$$ Now, I have to prove it. I multiply the number by $10^n$: $$a = 0.(\underbrace{00\dots0}_{n-1}1)$$ $$10^{n}a = 1.(\underbrace{00\dots0}_{n-1}1)$$ Now, I subtract the first expression from the second one: $$(10^{n} - 1)a = 1$$ $$a = \frac{1}{10^{n} - 1} = \frac{1}{\underbrace{99\dots9}_{n}}$$ So, I have proved that I am right.''

Here is Rolf's generalization. "A generalization can be stated as follows: Given a number $(a_1)(a_2)(a_3)(a_4)\dots(a_n)$ where each $(a_i)$ corresponds to the $i^{th}$ digit of the number, we can create an infinitely repeating decimal of the form $0.a_{1}a_{2}a_{3}a_{4}\dots{a_{n}}a_{1}a_{2}a_{3}a_{4}\dots{a_{n}}\dots$ by writing it as a fraction with repeated nines as the denominator, that is $$0.a_{1}a_{2}\dots{a_{n}}a_{1}a_{2}\dots{a_{n}}\dots = \frac{a_{1}a_{2}a_{3}a_{4}\dots{a_{n}}}{\underbrace{9\dots9}_{n}}$$ (a sequence of $n$ $9$'s in the denominator). A proof of this is given for the sequence $0.234523452345\ldots$ Let $k= 0.234523452345 \ldots$ Then $10000k = 2345.23452345 \ldots$ and $9999k= 2345$. Hence $k = 2345/9999$ as desired. This proof can clearly be modified to prove the generalization for any sequence of $(a_{i})$s.''
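Both the pattern and Rolf's generalization are easy to confirm with exact arithmetic; the short sketch below just prints a few of the expansions using Python's decimal and fractions modules.

```python
# Check: 1/(10^n - 1) expands as 0.(00...01), and a repeating n-digit block
# equals that block divided by n nines (the 2345/9999 example from the text).
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 40
for n in range(1, 5):
    print(n, Decimal(1) / Decimal(10**n - 1))   # 0.111..., 0.0101..., 0.001001...

print(Fraction(2345, 9999))                      # stays 2345/9999 (already reduced)
print(Decimal(2345) / Decimal(9999))             # 0.23452345...
```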
http://physics.stackexchange.com/questions/21028/second-law-of-thermodynamics-why-is-it-only-almost-always-true-that-entropy-i/21036
# Second law of Thermodynamics: Why is it only "almost" always true that entropy is non-decreasing?

The venerable Wikipedia states the second law of Thermodynamics as such: "...that the entropy of any closed system not in thermal equilibrium almost always increases." Er, "almost"? Of course I understand that the second law of thermodynamics is based upon the statistical unlikelihood of fast-moving molecules to aggregate; however, if the situation is in fact only "almost", then why is the phenomenon stated as a law? Is it only because we have not yet observed the unlikely aggregation? -

3 Maybe they're implying that it can stay constant. Added a [[citation needed]]. – Manishearth♦ Feb 15 '12 at 11:14 It is a law for the classical thermodynamics system. Probabilities belong to statistical mechanics, and statistical thermodynamics is a way of "explaining" why and how classical thermodynamics works. Each is a different self-contained mathematical system. – anna v Feb 15 '12 at 11:55 @Anna: If the law pertains to classical thermodynamics but the "almost" pertains to statistical thermodynamics, then in my opinion the word "almost" should be removed, as it refers to a different field. Why conflate the two? – dotancohen Feb 15 '12 at 12:39 – Qmechanic♦ Feb 15 '12 at 12:45 My book of classical thermodynamics has no "almost" for the second law. It is the sloppy way of the Wikipedia editor in mixing the two systems. – anna v Feb 15 '12 at 13:15

## 5 Answers

I think the author has used the word "almost" to incorporate the possibility that $\Delta S$ can be equal to zero also. The exact statement of the second law of thermodynamics goes this way: An isolated system evolves in such a way that $\Delta S \geq 0$. This means that an isolated system should evolve in such a way that the multiplicity remains the same or increases. For example, consider a simple system with two macrostates $A$ and $B$, with $4$ and $6$ microstates respectively. If we find the system in macrostate $A$, it can evolve and remain in the same state or can move to macrostate $B$. But, if we find the system initially in macrostate $B$, it will remain in macrostate $B$ and never transits to macrostate $A$. To my knowledge, no one knows why nature follows this rule!!! -

Of course we know: statistical thermodynamics tells us that the rule is followed because the probability of disorder increasing is practically 1. It is within the classical thermodynamic system, a mathematically consistent system, that it is an absolute law. – anna v Feb 15 '12 at 15:08 @annav: but traditional statistical mechanics makes some assumptions on the behaviour of physical systems, the only real justification for which is that they lead to it being compatible with thermodynamics. However we can't really reduce them to physical laws independent of thermodynamics. (Maybe we can actually, but I don't think there is a definitive solution yet. FWIW I've just started working on that very matter.) – leftaroundabout Feb 15 '12 at 16:09 Comments to the answer (v1): The word almost is not just included to allow $S$ to stay constant, but also to take into account the small statistical probability that $S$ actually decreases. – Qmechanic♦ Feb 19 '12 at 12:58 That Nature follows the laws of thermodynamics is a well-known consequence of statistical mechanics and the idealization that the particle number can be taken as infinite. The last assumption eliminates arguments about Poincare recurrence, which are valid only in finite systems.
As the universe may well contain infinitely many particles, this assumption is perhaps even true in a literal sense, rather than an approximation only. – Arnold Neumaier Mar 2 '12 at 16:05 Well, not every application of statistical mechanics considers the whole universe. In principle, one may monitor the entropy of a very small finite system, and in such a system, there could be a non-zero chance that the entropy decreases once in a while, so OP's question is not just academic. – Qmechanic♦ Mar 2 '12 at 18:26

You say: Of course I understand that the second law of thermodynamics is based upon the statistical unlikelihood of fast-moving molecules to aggregate and the word "almost" just means "statistical unlikelihood". I suppose calling the second law a "law" is a matter for debate. Statistical thermodynamics gives you (in principle) a way of precisely calculating the probability that the entropy will increase, so even though the statement "will almost always increase" sounds vague, it can be made as precise as you want. It seems to me that this justifies calling the second law a "law". -

How about "unlikely to decrease" rather than "almost always increase". – dotancohen Feb 19 '12 at 15:34

''the entropy of any closed system not in thermal equilibrium almost always increases'' This doesn't (or at least shouldn't) refer to probability, which is a concept completely foreign to thermodynamics. Thermodynamics assumes (from a statistical mechanics perspective) the thermodynamic limit of infinitely many particles, in which case statistical irregularities are completely absent. If a system is too small for the thermodynamic limit to be a good approximation, thermodynamics no longer applies, so the question of the validity of the laws for such a system is no longer sensible. The whole thermodynamic formalism and terminology breaks down for such a system, not only the second law. (Just as the fact that the laws of Germany are not applicable in Austria doesn't mean that they are only almost always valid.) It also cannot refer to the fact that entropy is constant in equilibrium, since the statement explicitly assumes a nonequilibrium state. Therefore the statement can sensibly refer only to the fact that there are many systems in nature that are in a metastable state only (see http://en.wikipedia.org/wiki/Metastable_state). Thus they are not in equilibrium but retain their state (and hence don't change their entropy) unless seeded from outside (loss of closure), in which case they suddenly undergo a phase transition. -

Thanks, Arnold. I had not considered the possibility of a metastable system, and if that is in fact the reason for the word "almost" then I think that it should be clarified or even omitted. – dotancohen Mar 3 '12 at 13:19

In order to understand the meaning of "almost" you should read about the Poincaré recurrence paradox. A gas of particles enclosed in a part of a box naturally expands, increasing entropy, into the whole box. But there is a probability that a fluctuation happens so that the whole gas is again confined in a part of the box: entropy decreases again. It is a mechanical consideration, of course. Poincaré said that the time needed to see such a fluctuation is longer than the estimated age of the universe, so it is physically meaningless. I won't reproduce Poincaré's proof here, but you should also first look at Liouville's theorem. -

Thank you. I will google Poincaré.
– dotancohen Feb 19 '12 at 15:33 The way the second "law" actually works is like this: any given system has a multitude of states it can occupy. It is assumed all of these states are equally likely. In that case, states which exist in a multitude of permutations (but are physically equivalent) are more likely to be observed, i.e, you are much more likely to see BAAA=ABAA=AABA=AAAB than to see AAAA, assuming all these are possible. There is not actually any kind of mechanistic rule attached. Heat "flows" from a hot to a cool system simply in that there are more units of energy in the one, so in the random transfer processes, it is more likely to observe a unit of energy transferring from the hot system. In practice, the numbers which represent the number of possible states near equilibrium are so large that they are hard even to compute with (like 10^100 factorial) so these vastly dominate the probability of events. But apart from commenting on where the dice are most likely to land, there is no rule in quantum thermodynamics which says heat has to transfer from the hot system to the cold system. It can just as well do the reverse, and this will become more apparent as you examine smaller scales. -
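To put a rough number on the "statistical unlikelihood" the answers keep invoking (a back-of-the-envelope estimate of ours, not taken from any of the answers above): for an ideal gas of $N$ non-interacting molecules in a box, each molecule is equally likely to be found in either half, so the probability of a spontaneous fluctuation that gathers all of them in one half is

$$P = 2^{-N}.$$

For a macroscopic sample with $N \approx 10^{20}$ molecules this is $P \approx 10^{-3\times 10^{19}}$, which is why the entropy-decreasing fluctuations permitted by the Poincaré recurrence argument are never observed on laboratory scales, while for very small systems (small $N$) such fluctuations become measurable.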
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_9&diff=31624&oldid=31612
# User:Michiexile/MATH198/Lecture 9

### From HaskellWiki

Current revision (18:40, 17 November 2009)
### 1 Recursion patterns

Meijer, Fokkinga & Patterson identified in the paper Functional programming with bananas, lenses, envelopes and barbed wire a number of generic patterns for recursive programming that they had observed, catalogued and systematized. The aim of that paper is to establish a number of rules for modifying and rewriting expressions involving these generic recursion patterns.

As it turns out, these patterns are instances of the same phenomenon we saw last lecture: where the recursion comes from specifying a different algebra, and then taking a uniquely existing morphism induced by initiality (or, as we shall see, finality).

Before we go through the recursion patterns, we need to establish a few pieces of theoretical language, dualizing the Eilenberg-Moore algebra constructions from the last lecture.

#### 1.1 Coalgebras for endofunctors

Definition If $P: C\to C$ is an endofunctor, then a P-coalgebra on A is a morphism $a: A\to PA$. A morphism of coalgebras $f: a\to b$ is some $f: A\to B$ such that the corresponding diagram commutes.

Just as with algebras, we get a category of coalgebras. And the interesting objects here are the final coalgebras. Just as with algebras, we have

Lemma (Lambek) If $a: A\to PA$ is a final coalgebra, it is an isomorphism.
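In Haskell terms, Lambek's lemma is the observation that the structure map of the fixed point of a functor has an inverse. Here is a minimal sketch (the names `Fix`, `In`, `out` and `ListF` are ours, not from the lecture notes; the sections below instead use the built-in list type `[a]` directly):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Fixed point of a functor f: the carrier of an initial algebra
-- (and, in Haskell's lazy setting, also of a final coalgebra).
newtype Fix f = In { out :: f (Fix f) }

-- In  :: f (Fix f) -> Fix f   is an algebra structure map,
-- out :: Fix f -> f (Fix f)   is a coalgebra structure map,
-- and they are mutually inverse, as Lambek's lemma predicts.

-- A base functor for lists of a, mirroring the F defined below:
data ListF a b = NilF | ConsF a b deriving Functor

type List a = Fix (ListF a)

nil :: List a
nil = In NilF

cons :: a -> List a -> List a
cons x xs = In (ConsF x xs)
```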
Finally, one thing that makes us care highly about these entities: in an appropriate category (such as ω-CPO), initial algebras and final coalgebras coincide, with the correspondence given by inverting the algebra/coalgebra morphism. In Haskell this is not quite true (specifically, the final coalgebra for the lists functor gives us streams...).

Onwards to recursion schemes! We shall define a few specific morphisms we'll use repeatedly. This notation, introduced here, occurs all over the place in these corners of the literature, and is good to be aware of in general:

• If $a: TA\to A$ is an initial algebra for T, we denote a = inA.
• If $a: A\to TA$ is a final coalgebra for T, we denote a = outA.
• We write μf for the fixed point operator

```
mu f = x where x = f x
```

We note that in the situation considered by MFP, initial algebras and final coalgebras coincide, and thus inA, outA are the pair of isomorphic maps induced by either the initial algebra- or the final coalgebra-structure.

#### 1.2 Catamorphisms

A catamorphism is the uniquely existing morphism from an initial algebra to a different algebra. We have to define maps down to the return value type for each of the constructors of the complex data type we're recursing over, and the catamorphism will deconstruct the structure (trees, lists, ...) and do a generalized fold over the structure at hand before returning the final value. The intuition is that for catamorphisms we start essentially structured, and dismantle the structure.

Example: the length function from last lecture. This is the catamorphism for the functor $P_A(X) = 1 + A\times X$ given by the maps

```
u :: Int
u = 0

m :: (A, Int) -> Int
m (a, n) = n+1
```

MFP define the catamorphism by, supposing T is initial for the functor F:

```
cata :: (F a b -> b) -> T a -> b
cata phi = mu (\x -> phi . fmap x . outT)
```

We can reframe the example above as a catamorphism by observing that here,

```
data F a b = Nil | Cons a b deriving (Eq, Show)
type T a = [a]

instance Functor (F a) where
  fmap _ Nil = Nil
  fmap f (Cons n a) = Cons n (f a)

outT :: T a -> F a (T a)
outT [] = Nil
outT (a:as) = Cons a as

lphi :: F a Int -> Int
lphi Nil = 0
lphi (Cons a n) = n + 1

l = cata lphi
```

where we observe that mu has a global definition for everything we do and out is defined once we settle on the functor F and its initial algebra. Thus, the definition of phi really is the only place that the recursion data shows up.
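As a quick sanity check (our own addition, not from the MFP paper or the original notes): with the definitions of mu, F, T, outT, lphi and cata from this section in scope, the length catamorphism can be run directly, and one would expect

```haskell
-- Hypothetical test values; l is the length catamorphism defined above.
testEmpty :: Int
testEmpty = l ([] :: [Int])   -- evaluates to 0: outT hits the Nil case immediately

testThree :: Int
testThree = l [10, 20, 30]    -- evaluates to 3: one Cons layer is peeled off per step
```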
#### 1.3 Anamorphisms

An anamorphism is the categorical dual to the catamorphism. It is the canonical morphism from a coalgebra to the final coalgebra for that endofunctor. Here, we start unstructured, and erect a structure, induced by the coalgebra structures involved.

Example: we can write a recursive function

```
first :: Int -> [Int]
first 1 = [1]
first n = n : first (n - 1)
```

This is an anamorphism from the coalgebra for $P_{\mathbb N}(X) = 1 + \mathbb N\times X$ on $\mathbb N$ generated by the two maps

```
c 0 = Left ()
c n = Right (n, n-1)
```

and we observe that we can chase through the diagram to conclude that therefore

```
f 0 = []
f n = n : f (n - 1)
```

which is exactly the recursion we wrote to begin with.

MFP define the anamorphism by a fixpoint as well, namely:

```
ana :: (b -> F a b) -> b -> T a
ana psi = mu (\x -> inT . fmap x . psi)
```

We can, again, recast our illustration above into a structural anamorphism, by:

```
-- Reuse mu, F, T from above
inT :: F a (T a) -> T a
inT Nil = []
inT (Cons a as) = a:as

fpsi :: Int -> F Int Int
fpsi 0 = Nil
fpsi n = Cons n (n-1)
```

Again, we can note that the implementation of fpsi here is exactly the c above, and the resulting function will - as we can verify by compiling and running - give us the same kind of reversed list of the n first integers as the first function above would.

#### 1.4 Hylomorphisms

The hylomorphisms capture one of the two possible compositions of anamorphisms and catamorphisms. Parametrized over an algebra $\phi: T A\to A$ and a coalgebra $\psi: B \to T B$, the hylomorphism is a recursion pattern that computes a value in A from a value in B by generating some sort of intermediate structure and then collapsing it again. It is, thus, the composition of the uniquely existing morphism from a coalgebra to the final coalgebra for an endofunctor, followed by the uniquely existing morphism from the initial algebra to some other algebra.

MFP define it, again, as a fix point:

```
hylo :: (F a b2 -> b2) -> (b1 -> F a b1) -> b1 -> b2
hylo phi psi = mu (\x -> phi . fmap x . psi)
```

First off, we can observe that by picking one or the other of inA, outA as a parameter, we can recover both the anamorphisms and the catamorphisms as hylomorphisms.

As an example, we'll compute the factorial function using a hylomorphism:

```
phi :: F Int Int -> Int
phi Nil = 1
phi (Cons n m) = n*m

psi :: Int -> F Int Int
psi 0 = Nil
psi n = Cons n (n-1)

factorial = hylo phi psi
```
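To make the earlier remark concrete - that catamorphisms and anamorphisms are recovered from hylomorphisms by plugging in the structure maps - here is a two-line sketch (our own restatement in the names of this lecture, not a quotation from MFP):

```haskell
cata' :: (F a b -> b) -> T a -> b
cata' phi = hylo phi outT    -- deconstruct with outT, then fold with phi

ana' :: (b -> F a b) -> b -> T a
ana' psi = hylo inT psi      -- unfold with psi, then rebuild with inT
```

These agree with cata and ana as defined above, since outT and inT are exactly the coalgebra and algebra structure maps of the list type.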
#### 1.5 Metamorphisms

The metamorphism is the other composition of an anamorphism with a catamorphism. It takes some structure, deconstructs it, and then reconstructs a new structure from it.

As a recursion pattern, it's kinda boring - it'll take an interesting structure, deconstruct it into a scalar value, and then reconstruct some structure from that scalar. As such, it won't even capture the richness of $hom(F x, G y)$, since any morphism expressed as a metamorphism will factor through a map $x\to y$.

#### 1.6 Paramorphisms

Paramorphisms were discussed in the MFP paper as a way to extend the catamorphisms so that the operating function can access its arguments in computation as well as in recursion. We gave the factorial above as a hylomorphism instead of a catamorphism precisely because no simple enough catamorphic structure exists.

#### 1.7 Apomorphisms

The apomorphism is the dual of the paramorphism - it does with retention of values along the way what anamorphisms do compared to catamorphisms.

### 2 Further reading

• Erik Meijer, Maarten Fokkinga, Ross Paterson: Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire [1]
• L. Augusteijn: Sorting morphisms [2]

### 3 Further properties of adjunctions

#### 3.1 RAPL

Proposition If F is a right adjoint, i.e. if F has a left adjoint, then F preserves limits in the sense that $F(\lim_{\leftarrow} A_i) = \lim_{\leftarrow} F(A_i)$.

Example: $(\lim_{\leftarrow_i} A_i)\times X = \lim_{\leftarrow_i} A_i\times X$.

We can use this to prove that things cannot be adjoints - since all right adjoints preserve limits, if a functor G doesn't preserve limits, then it doesn't have a left adjoint.

Similarly, and dually, left adjoints preserve colimits. Thus if a functor doesn't preserve colimits, it cannot be a left adjoint, thus cannot have a right adjoint.

The proof of these statements builds on the Yoneda lemma:

Lemma If C is a locally small category (i.e. all hom-sets are sets), then for any $c\in C_0$ and any functor $F: C^{op}\to Sets$ there is an isomorphism $hom_{Sets^{C^{op}}}(yC, F) = FC$, where we define $yC = d\mapsto hom_C(d,c) : C^{op}\to Sets$.

The Yoneda lemma has one important corollary:

Corollary If yA = yB then A = B.

This, in turn, has a number of important corollaries:

Corollary $(A^B)^C = A^{B\times C}$

Corollary Adjoints are unique up to isomorphism - in particular, if $F: C\to D$ is a functor with right adjoints $U, V: D\to C$, then U = V.

Proof $hom_C(C, UD) = hom_D(FC, D) = hom_C(C, VD)$, and thus by the corollary to the Yoneda lemma, UD = VD, natural in D.

#### 3.2 Functors that are adjoints

• The functor $X\mapsto X\times A$ has right adjoint $Y\mapsto Y^A$. The universal mapping property of the exponentials follows from the adjointness property (see the Haskell sketch after this list).
• The functor $\Delta: C\to C\times C, c\mapsto (c,c)$ has a left adjoint given by the coproduct $(X,Y)\mapsto X + Y$ and right adjoint the product $(X,Y)\mapsto X\times Y$.
• More generally, the functor $C\to C^J$ that takes c to the constant functor $const_c(j) = c$, $const_c(f) = 1_c$ has left and right adjoints given by colimits and limits: $\lim_\rightarrow \dashv \Delta \dashv \lim_\leftarrow$
• Pointed rings are pairs $(R, r\in R)$ of rings and one element singled out for attention. Homomorphisms of pointed rings need to take the distinguished point to the distinguished point. There is an obvious forgetful functor $U: Rings_* \to Rings$, and this has a left adjoint - a free ring functor that adjoins a new indeterminate $R\mapsto (R[x], x)$. This gives a formal definition of what we mean by formal polynomial expressions etc.
• Given sets A, B, we can consider the powersets P(A), P(B) containing, as elements, all subsets of A, B respectively. Suppose $f:A\to B$ is a function, then $f^{-1}: P(B)\to P(A)$ takes subsets of B to subsets of A. Viewing P(A) and P(B) as partially ordered sets by the inclusion operations, and then as categories induced by the partial order, $f^{-1}$ turns into a functor between partial orders. And it turns out $f^{-1}$ has a left adjoint given by the operation $im(f)$ taking a subset to the set of images under the function f. And it has a right adjoint $f_*(U) = \{b\in B: f^{-1}(b)\subseteq U\}$.
• We can introduce a categorical structure to logic. We let L be a formal language, say of predicate logic. Then for any list $x = x_1, x_2, ..., x_n$ of variables, we have a preorder Form(x) of formulas with no free variables not occurring in x. The preorder on Form(x) comes from the entailment operation: $f \vdash g$ if in every interpretation of the language, $f \Rightarrow g$. We can build an operation on these preorders - a functor on the underlying categories - by adjoining a single new variable: $*: Form(x) \to Form(x, y)$, sending each formula to itself. Obviously, if $f \vdash g$ with x the source of free variables, and we introduce a new allowable free variable but don't actually change the formulas, the entailment stays the same. It turns out that there is a right adjoint to * given by $f\mapsto \forall y. f$, and a left adjoint to * given by $f\mapsto \exists y. f$. Adjointness properties give us classical deduction rules from logic.
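For the first bullet in the list above, the product-exponential adjunction is something Haskell programmers use daily: the natural isomorphism $hom(X\times A, Y) = hom(X, Y^A)$ is witnessed by currying. A minimal sketch in plain Haskell (nothing here is specific to these notes):

```haskell
-- hom(X x A, Y)  is isomorphic to  hom(X, Y^A), naturally in X and Y:
toExp :: ((x, a) -> y) -> (x -> (a -> y))
toExp f = \x a -> f (x, a)        -- this is curry

fromExp :: (x -> (a -> y)) -> ((x, a) -> y)
fromExp g = \(x, a) -> g x a      -- this is uncurry

-- toExp and fromExp are mutually inverse, which is exactly the adjointness
-- of (x A) and (A ->) that gives exponentials their universal mapping property.
```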
### 4 Homework

1. Write a fold for the data type data T a = L a | B a a | C a a a and demonstrate how this can be written as a catamorphism by giving the algebra it maps to.
2. Write the fibonacci function as a hylomorphism.
3. Write the Towers of Hanoi as a hylomorphism. You'll probably want to use binary trees as the intermediate data structure.
4. Write a prime numbers generator as an anamorphism.
5. * The integers have a partial order induced by the divisibility relation. We can thus take any integer and arrange all its divisors in a tree by having an edge $n \to d$ if d | n and d doesn't divide any other divisor of n. Write an anamorphic function that will generate this tree for a given starting integer. Demonstrate how this function is an anamorphism by giving the coalgebra it maps from.

Hint: You will be helped by having a function to generate a list of all primes. One suggestion is:

```
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
```

Hint: A good data structure to use, together with the expected output of running the algorithm on 60, is:

```
data Tree = Leaf Integer | Node Integer [Tree]

divisionTree 60 =
  Node 60 [
    Node 30 [
      Node 15 [ Leaf 5, Leaf 3],
      Node 10 [ Leaf 5, Leaf 2],
      Node 6 [ Leaf 3, Leaf 2]],
    Node 20 [
      Node 10 [ Leaf 5, Leaf 2],
      Node 4 [ Leaf 2]],
    Node 12 [
      Node 6 [ Leaf 3, Leaf 2],
      Node 4 [ Leaf 2]]]
```
http://www.cfd-online.com/W/index.php?title=Heat_transfer&diff=7497&oldid=7496
# Heat transfer

### From CFD-Wiki

## Revision as of 20:49, 24 May 2007

Everyone has always understood that something flows from a hot object to a cold one. It is called heat. The overall driving force for this heat flow is the thermal gradient. There are basically three modes of heat transfer:

1) Conduction
2) Convection
3) Radiation

## Conduction

Conduction is heat transfer by means of molecular agitation within a material without any motion of the material as a whole. In this mode of heat transfer, heat flow is due to molecular collisions in the substance. Since molecular collisions increase with temperature (temperature increases the kinetic energy of the molecules, which is a combined effect of the rotational, translational and vibrational motion of the molecules), heat transfer due to conduction increases.

Conduction heat transfer through a substance is caused by a temperature gradient. The rate of heat transfer by conduction between two regions of a substance is proportional to the temperature difference between them. The constant of proportionality is called the thermal conductivity of the material. The thermal conductivity of materials in general depends on temperature. In general, liquids and gases have lower thermal conductivity compared to solids (especially metals).

Mathematically, it can be described using Fourier's law:

$Q_{Conduction} = -k*A*\frac{dT}{dx}$

Where

$Q = \mbox{the rate of heat conduction (W)}$
$k = \mbox{Thermal conductivity of the material (W/m K)}$
$A = \mbox{Cross-sectional area of the object perpendicular to heat conduction (m2)}$
$T = \mbox{Temperature (K)}$
$x = \mbox{Length of the object (m)}$

(The negative sign indicates that temperature decreases in the direction of heat flow.)

Conduction heat transfer depends on the thermal conductivity of the material. Heat carried away by a solid by conduction is proportional to its thermal conductivity for a unit length and unit cross-section. Here are a few materials with their conductivity:

Diamond: 2000 W/m-K
Silver: 406 W/m-K
Copper: 385 W/m-K
Aluminum: 205 W/m-K
Sn-Pb Solder: 50 W/m-K
FR-4: 0.3 W/m-K
Kapton: 0.2 W/m-K
....
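As a quick plug-in-the-numbers illustration of Fourier's law (our own example, using the tabulated value for aluminum above, not part of the original article): for a flat aluminum plate of cross-section $A = 0.01\ \mathrm{m^2}$ and thickness $\Delta x = 0.005\ \mathrm{m}$, with a temperature difference of $\Delta T = 10\ \mathrm{K}$ across it,

$$Q \approx k\,A\,\frac{\Delta T}{\Delta x} = 205 \times 0.01 \times \frac{10}{0.005} = 4100\ \mathrm{W},$$

so even a thin metallic wall conducts a substantial amount of heat for a modest temperature difference, while the same geometry in FR-4 (k = 0.3 W/m-K) would conduct only about 6 W.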
## Convection

Convection is heat transfer by means of motion of the molecules in the fluid. Heat energy transfers between a solid and a fluid when there is a temperature difference between the fluid and the solid. Convection heat transfer cannot be neglected when there is significant fluid motion around the solid.

There are mainly two types of convection heat transfer:

1) Natural or free convection
2) Forced convection

• Natural or Free Convection:- A temperature difference between the solid and the surrounding fluid can induce fluid motion through buoyancy. This is known as "natural convection" and it is a strong function of the temperature difference between the solid and the fluid. This type of convective heat transfer takes place due only to fluid buoyancy caused by the temperature difference between fluid layers.

• Forced Convection:- Forcing air to blow over the solid by using external devices such as fans and pumps can also generate a fluid motion. This is known as "forced convection". Some external means for fluid motion is necessary in this type of convective heat transfer.

Fluid mechanics plays a major role in determining convection heat transfer. For each kind of convection heat transfer, the fluid flow can be either laminar or turbulent. For laminar flow of a fluid over a solid surface, a steady boundary layer forms through which conductive heat transfer occurs. This reduces the convective heat transfer rate. Turbulent flow forms when the boundary layer is shedding or breaking due to higher velocities or rough geometries. This enhances the heat transfer.

Newton's Law of Cooling

Heat transfer due to convection is described by Newton's Law of Cooling,

$Q_{Convection} = h*A*dT$

Where

$Q_{Convection} = \mbox{Heat convected to surrounding fluid (W)}$
$h = \mbox{Convection heat-transfer coefficient (constant of proportionality) (W/m2 K)}$
$A = \mbox{Area of the solid in contact with fluid (m2)}$
$dT = \mbox{Temperature difference between solid and surrounding fluid (Ts-Tf) (K)}$

The rate of heat transferred to the surrounding fluid is proportional to the object's exposed area A and the difference between the solid temperature Ts and the mean fluid temperature Tf. The velocity of the fluid over the solid is also a major contributor to the rate of heat transfer.

The convection heat-transfer coefficient (h) plays the main role in heat transfer by convection, and heat transfer rates by convection are expressed in terms of h. It generally increases with the following factors:

1) Exposed area of the solid
2) Temperature difference between solid and fluid
3) Fluid velocity

Convective heat transfer can also be characterised in terms of the Nusselt number. The Nusselt number is a dimensionless number which is useful for heat transfer calculations. It is the dimensionless heat transfer coefficient and appears when you are dealing with convection. It, therefore, provides a measure of the convection heat transfer at the surface. It can be defined as follows:

$Nu = \frac{h*l}{k}$

Where

$Nu = \mbox{Nusselt number}$
$h = \mbox{Convection heat-transfer coefficient (constant of proportionality) (W/m2 K)}$
$l = \mbox{Characteristic dimension of the solid object (m)}$
$k = \mbox{Thermal conductivity of the fluid (W/m K)}$

The Nusselt number may be viewed as the ratio of heat flow by convection to conduction for a layer of fluid. If Nu = 1, we have pure conduction. Higher values of the Nusselt number mean that the heat transfer is enhanced by convection.
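A small worked example of Newton's law of cooling (our own numbers, not from the original article): suppose a heated plate of exposed area $A = 0.2\ \mathrm{m^2}$ sits at $T_s = 350\ \mathrm{K}$ in air at $T_f = 300\ \mathrm{K}$, with a forced-convection coefficient of roughly $h = 25\ \mathrm{W/m^2 K}$ (a plausible order of magnitude for air blown by a fan). Then

$$Q_{Convection} = h\,A\,(T_s - T_f) = 25 \times 0.2 \times 50 = 250\ \mathrm{W}.$$

If the plate's characteristic length is $l = 0.1\ \mathrm{m}$ and the air's conductivity is about $k = 0.026\ \mathrm{W/m\,K}$, the corresponding Nusselt number is $Nu = h l / k \approx 25 \times 0.1 / 0.026 \approx 96$, i.e. convection here moves heat roughly a hundred times faster than conduction through a stagnant air layer of the same thickness would.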