http://mathoverflow.net/questions/8495/finding-divisors-on-a-curve/8504
## Finding divisors on a curve

What is the best way to find an actual divisor of an affine curve? That is, if I am interested in finding a canonical divisor of a curve in two variables, is there a general way to go about it? Do I need to consider a projection onto the x-axis?

I should clarify: I'm assuming the field is of characteristic 0, and the curve is affine of the form f(x,y)=0. I computed the closure in $\mathbb{P}^2$, it was smooth, and I am now trying to compute a canonical divisor on this curve. Thanks for the comments; I am reading up on it now. -

## 4 Answers

As others have pointed out, there are several different ways to explicitly write down a divisor. So it would be helpful to know what kind of answer you're looking for. Anyhow, here's one answer. For a curve, the canonical divisor is the same as the sheaf of differential 1-forms. Let's assume that your curve $C$ in the affine plane is cut out by the equation $f(x,y)=0$. Theorem II 8.17 in Hartshorne's Algebraic Geometry book yields an exact sequence for computing $\Omega^1_{C/k}$. Namely, we have an exact sequence: $I/I^2 \to \Omega^1_{\mathbb A^2_k}\otimes \mathcal O_C\to \Omega^1_{C/k}\to 0$ where $I$ is the defining ideal of the curve $C$. Since $I$ is generated by $f$, and since $\Omega^1_{\mathbb A^2_k}\otimes \mathcal O_C$ is the free $\mathcal O_C$-module with generators $dx, dy$, it follows that $\Omega^1_{C/k}$ is generated by $dx$ and $dy$ modulo the relation $df$. This provides an explicit presentation for the canonical divisor as a module. When $C$ is smooth, this will be a locally free rank 1 $\mathcal O_C$-module. -

Hey Elijah, the answer to your question is quite simple, elementary and explicit! You don't need to read up on anything fancy. Here it goes:

Fact: The canonical divisor of a smooth affine hypersurface is zero! In particular the canonical divisor of your curve $f(x,y)=0$ is $0$, since as you have mentioned the curve has a smooth projectivisation (so the curve itself must be smooth).

Proof: Let $X\subset\mathbb{A}^2$ be the affine curve defined by $f(x,y)$, which we are assuming to be smooth. Define the open sets $U_1,U_2$ in the plane by $\frac{\partial f}{\partial x}\neq{0}$ and $\frac{\partial f}{\partial y}\neq{0}$. Then $y$ and $x$ are local parameters in $U_1$ and $U_2$ respectively, and the forms $dy$ and $dx$ are bases of $\Omega^{1}[U_1]$ over $k[U_1]$ and $\Omega^{1}[U_2]$ over $k[U_2]$, respectively. However, let us choose a more convenient basis, namely $\omega_1=-\frac{dy}{\partial f/\partial x}$ on $U_1$ and $\omega_2=\frac{dx}{\partial f/\partial y}$ on $U_2$. This is permissible since the denominators don't vanish on the respective open sets. Now note that on $U_1\cap{U_2}$ both forms are equal, since $\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy=0$; therefore they patch to give a form $\omega$ that is regular and everywhere nonzero on $X$, so that $div\ \omega=0$ on $X$. In other words the canonical divisor is zero.

Note: This works analogously for any smooth affine hypersurface.

P.S.: Quoting an exact sequence is not a substitute for making even one small and simple calculation. Hope this motivates you for more algebraic geometry! -

I am confused: take the line $y = 0$ (the x-axis) in $\mathbb{A}^{2}$. The differential form $x\;dx$ will have a divisor $[0]$?
– isildur Jun 20 2011 at 5:33

Do you mean an actual divisor corresponding to some rational function? Then look at the zeros and poles (i.e., factor the numerator and denominator; this part might need some commutative algebra in general). Otherwise, as Kevin said, divisors are just formal sums of points. They are book-keeping devices that are boring without the objects they book-keep for (if that made sense). I suggest you look at projective curves, where all the nice properties are more obvious (for compactness reasons). For canonical divisors, explicitly factor a differential (again, the class is canonical; there are many divisors depending on the choice of a differential). The only difference is that now you need the differential part (dx), and it will change your function as you go from chart to chart. Otherwise computing its divisor is basically the same. This is the explicit approach taken in Miranda's Algebraic Curves and Riemann Surfaces. If you have a good understanding (I don't) of manifolds and bundles, you can take the tangent bundle approach. -

Not sure if I understand your question, but here are some generalities. There is a correspondence between line bundles and divisors (you can find this in any algebraic geometry book). There is a canonical line bundle on any curve -- in the smooth case this is the cotangent bundle, or the sheaf of differentials, or whatever you prefer to call it. In the singular case there is still the dualizing sheaf. Then you get a canonical divisor, namely the one which corresponds to this canonical line bundle. It's easy to find "an actual divisor" of a curve. Divisors on a curve are just formal combinations of points. So find some points of your curve, and voila.

Edit: Let me now elucidate a bit. Dan's answer provides a way to identify the canonical line bundle in your situation of interest. The next step is to identify the corresponding divisor (or, more precisely, as Rado points out, the class of the divisor). You can do this by writing down any nonzero (meromorphic) differential form, i.e. any nonzero (meromorphic) section of the canonical line bundle, and finding its zeroes and poles (counting multiplicities); then the canonical divisor will be the formal sum of the zeroes minus the formal sum of the poles (counting multiplicities). -
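To make the patching argument above concrete, here is the computation for one specific curve (a worked example added for illustration; the choice of curve is mine, not the original poster's). Take $f(x,y) = x^2 + y^2 - 1$ over a field of characteristic $0$, so $\partial f/\partial x = 2x$ and $\partial f/\partial y = 2y$. On the curve we have $2x\,dx + 2y\,dy = 0$, hence

$$\omega \;=\; \frac{dx}{\partial f/\partial y} \;=\; \frac{dx}{2y} \;=\; -\frac{dy}{\partial f/\partial x} \;=\; -\frac{dy}{2x}.$$

Since $2x$ and $2y$ never vanish simultaneously on the (smooth) curve, the two expressions patch to a global regular $1$-form with no zeros or poles, so $div\ \omega = 0$, exactly as the answer claims.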
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9341444373130798, "perplexity_flag": "head"}
http://mathoverflow.net/questions/761/undergraduate-level-math-books/4888
## Undergraduate Level Math Books [closed]

What are some good undergraduate level books, particularly good introductions to (Real and Complex) Analysis, Linear Algebra, Algebra or Differential/Integral Equations (but books in any undergraduate level topic would also be much appreciated)?

EDIT: More topics (Affine, Euclidean, Hyperbolic, Descriptive & Differential Geometry, Probability and Statistics, Numerical Mathematics, Distributions and Partial Differential Equations, Topology, Algebraic Topology, Mathematical Logic, etc.)

Please post only one book per answer so that people can easily vote the books up/down and we get a nice sorted list. If possible post a link to the book itself (if it is freely available online) or to its Amazon or Google Books page. -

6 It's no longer possible to add useful answers to this question (as there are too many!) and it's unclear whether this question would be "allowed" by modern standards -- far too broad. As it's been popping back to the front page fairly frequently, we've decided to close it. – Scott Morrison♦ Jul 11 2010 at 13:30

7 See discussion on meta: meta.mathoverflow.net/discussion/499/… (and remember to vote this comment up, so it is visible to others) – Victor Protsak Jul 14 2010 at 10:34

show 11 more comments

## 96 Answers

A Field Guide to Algebra by Antoine Chambert-Loir. Covers a surprising amount of material considering the short length and minimal prerequisites. -

"Topology Without Tears" by Sidney A. Morris is a very nice introduction to the theory of topological spaces, I think. -

1 Available online: uob-community.ballarat.edu.au/~smorris/… – sdcvvc Nov 9 2009 at 14:09

Subject: Functional Analysis. Erwin Kreyszig, Introductory Functional Analysis with Applications. -

E. Hairer, G. Wanner: Analysis by Its History, for an introduction to real and numerical analysis from a historical point of view. -

Geroch, Mathematical Physics. Don't be scared by the title: it teaches algebra, topology and measure theory, using category-theoretic language. -

"Differential Calculus on Normed Linear Spaces" by Kalyan Mukherjee. The author is a professor at the Indian Statistical Institute (Kolkata). This is an amazing book which introduces differential calculus in arbitrary finite dimensional spaces (thinking of the derivative as the Jacobian) as the next leap from the Apostol and Rudin level, and it also builds in topological ideas. It then goes into manifold theory and shows how to compute tangents to curves inside Lie groups. It has nice sections on things like differentiating the determinant function and the matrix multiplication function, the inverse function theorem, and the idea of equivalence of norms. I would strongly recommend this book to an undergrad after he/she has done the Apostol/Rudin level of calculus. "Calculus on Manifolds" by Spivak and "A Course in Differential Geometry and Lie Groups" by Kumaresan are two other good books which can be read alongside it. -

I think that Linear Algebra by Friedberg, Insel, and Spence is a spectacular linear algebra book. It gets straight to the point, it provides a worked out example or two exactly when they're needed, and it has lots of interesting exercises.
There are way too many gigantic linear algebra books with colour pictures and contrived examples which often seem to obfuscate the concepts being introduced. In my mind this book is 'the mathematician's linear algebra book': clean and concise. -

Visual Complex Analysis by Tristan Needham is awesome! -

4 In fact, there are others that agree with you: the book is already in the list! – Bjorn Poonen Mar 7 2010 at 7:46

I liked Elements of Abstract Algebra by Allan Clark, which is mainly a problem book with a moderate amount of exposition, but the problems are so well-chosen that a diligent undergraduate student working through all of them will come out with a solid knowledge of group theory, classical ring theory, and Galois theory. -

Since Numerical Mathematics has not been covered, I would recommend Introduction to Numerical Analysis by Stoer et al. http://www.amazon.com/Introduction-Numerical-Analysis-J-Stoer/dp/038795452X/ref=sr_1_14?ie=UTF8&s=books&qid=1255807973&sr=8-14 -

Godement, "Analysis" (I, II, III, IV) http://www.amazon.com/Analysis-Convergence-Elementary-functions-Universitext/dp/3540059237 "... The content is quite classical ... [...] The treatment is less classical: precise although unpedantic (rather far from the definition-theorem-corollary style), it contains many interesting commentaries of epistemological, pedagogical, historical and even political nature. [...] The author gives frequent interesting hints on recent developments of mathematics connected to the concepts which are introduced. The Introduction also contains comments that are very unusual in a book on mathematical analysis, going from pedagogy to critique of the French scientific-military-industrial complex, but the sequence of ideas is introduced in such a way that readers are less surprised than they might be." -

Linear Algebra with Applications by Otto Bretscher. Used at Carleton College – nice explanations, and quite a few proofs. Presents information primarily by providing examples, definitions/axioms and then proofs. -

I had many trials and these are in my opinion the best for an introductory, undergraduate level: ODEs: Holzner, Differential Equations for Dummies; PDEs: Farlow, Partial Differential Equations for Scientists and Engineers. -

Applied Linear Algebra, by B. Noble and J.W. Daniel, http://www.amazon.co.uk/Applied-Linear-Algebra-Ben-Noble/dp/0130412600/ref=sr_1_14?ie=UTF8&s=books&qid=1256927879&sr=1-14 -

G. J. O. Jameson, Topology and Normed Spaces, for an introduction to functional analysis from a topological point of view. -

Lindsay Childs, A Concrete Introduction to Higher Algebra, maybe. -

Strichartz, The Way of Analysis; Herstein, Abstract Algebra. -

Milnor's book "Dynamics in One Complex Variable: Introductory Lectures". An early version is available from his website. Suitable for advanced undergraduates, graduate students, and mathematicians. http://www.math.sunysb.edu/cgi-bin/preprint.pl?ims90-5 -

Introduction to Topology: Pure and Applied, by Adams and Franzosa. The figures in the book are beautiful, the problems are good, and the applications are good (and unusual) to see in an undergraduate text. -

Most readers here will not be able to appreciate it for a simple reason, but my favorite beginners' Analysis text is the one by Bröcker. No-nonsense, concise, with a slight orientation towards topology. -

Hugo Steinhaus's book "Mathematical Kaleidoscope" (http://en.wikipedia.org/wiki/Hugo_Steinhaus).
It is a kind of mathematical trivia, sometimes very deep ;-) It is not for learning math but for learning how to learn math in a fun way. -

Complex Variables: Harmonic and Analytic Functions by Francis J. Flanigan. A nice little Dover paperback which turns the standard course on complex variables on its head. It begins by doing some multivariable calculus in the plane and harmonic functions, then proceeds to talk about complex numbers and to build analytic functions. -

I found «A (terse) introduction to linear algebra» (Katznelson) to be a much better book than Axler. It's part of the Student Mathematical Library and published by the AMS. -

For (applied) ODEs: Nonlinear Dynamics and Chaos by Steven Strogatz. A very inspiring book! The explanations are crystal clear with lots of pictures. And it's funny too – the "Romeo and Juliet" illustration of 2-dimensional linear systems (Section 5.3) is a classic. -

Siegfried Bosch, Lineare Algebra. It's a very elegant, concise but beautifully written approach to Linear Algebra, and I love it. Unfortunately for people who don't speak German, it has never been translated. -

I plan to post a complete reading list for undergraduates and graduate students at my blog this summer with my commentaries, but here's one I think is available online and doesn't get nearly enough credit despite the fame of its author: Gilbert Strang's Calculus. I wouldn't use anything else for a regular, non-honors calculus course. Carefully written, beautifully motivated, with TONS of creative and SIGNIFICANT applications. I hope one day Strang finds the time to write a second edition; I have a list of improvements to suggest. -

A Concrete Approach to Abstract Algebra by W. W. Sawyer (\$6 on Amazon!). Though it goes a bit slow at times, it is by far the simplest, most intuitive book on Abstract Algebra in existence. Written for the non-mathematician, it does a great job of teaching the subject in simple, easy-to-understand prose. I couldn't put it down! There are also two chapters on linear algebra, leading up to the final chapters, "vectors over fields" and "fields regarded as vector-spaces". -

Lebesgue Integration on Euclidean Space by Frank Jones - an extremely readable book on Lebesgue theory on $\mathbb{R}^n$ (lots of figures and geometric intuition). He constructs Lebesgue measure in a very down-to-earth manner, much more explicitly than other more abstract constructions (via Caratheodory's extension theorem or Riesz's representation theorem). In my experience, it's best to first study Lebesgue measure on $\mathbb{R}^n$ and only then point out that it's merely one instance of the general theory of measures, which is the way this book is written. It can't compare with the "tougher" books on measure theory (e.g. Big Rudin) since it doesn't discuss the Radon-Nikodym theorem and many other important theorems in measure theory, but then again the book is clearly intended for an undergraduate audience, and as for Lebesgue theory on Euclidean spaces, it provides a pretty complete picture. -

I would recommend Walter Rudin's classic text entitled Real and Complex Analysis for the mathematically mature undergraduate student. (Of course, this should really only be read after one has familiarity with most of Rudin's earlier textbook, Principles of Mathematical Analysis.) The beautiful thing about this book is that nearly every example or result stated by Rudin is used in "the big picture".
As you progress through the book, you really start to appreciate the magic Rudin has used to weave together analysis in a manner that is rarely done in other textbooks. While Rudin states in his preface that the textbook is intended as a first year graduate course on the subject of analysis, I believe that it is quite possible for a mathematically mature undergraduate to follow it. There is no assumption in the book that the reader has any familiarity with linear algebra, abstract algebra, general topology etc. beyond that which was covered in the first seven chapters of his earlier book, but realistically one would like to have at least mastered the basics of these areas before attempting to delve deeper into the analysis covered in this book. (There is a chapter on Banach algebras, but all the necessary abstract algebra here is developed from scratch.) The textbook is indeed challenging, with plenty of exercises, but it is not something with which a student with the correct prerequisites should have tremendous difficulty. Furthermore, if a student can successfully read most of the book, he is well-equipped to go deeper into most branches of modern analysis, and should find the other classic texts in the subject, for instance Royden's Real Analysis, Bartle's The Elements of Integration and Lebesgue Measure, etc., very easy to follow. -

Algebraic Theory of Numbers, by Pierre Samuel. Assumes only elementary knowledge of group and ring theory (even less than a complete undergraduate course which covers Galois theory) and develops algebraic number theory, a beautiful subject which puts much elementary number theory into an interesting perspective. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9243987202644348, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/104368/rational-solutions-to-x3-y3-z3-3xyz-1/104381
## Rational solutions to x^3 + y^3 + z^3 - 3xyz = 1

I can show that there are infinitely many solutions to this equation. Is it possible that the set of rational solutions is dense? -

Dense in what? It can't be dense in $\mathbb{R}^3$ because there won't be any solutions with distance less than 1 from (0,0,1000) – David Aug 9 at 17:05

Do you mean are the rational solutions dense in the real solutions? – sobe86 Aug 9 at 17:05

He probably means Zariski dense. – Felipe Voloch Aug 9 at 17:27

2 @Marc: Dense in the surface, I think we got that, but in which topology? – Felipe Voloch Aug 9 at 20:08

19 $x^3 + y^3 + z^3 - 3xyz$ is the determinant of a $3\times 3$ circulant matrix, and thus factors as $(x+y+z) (x + \omega y + \omega^2 z) (x+\omega^2 y + \omega z)$ where $\omega$ is a cube root of unity. If $x + \omega y + \omega^2 z = \alpha \in {\bf Q}(\omega)$ then $x + \omega^2 y + \omega z$ is the complex conjugate $\bar \alpha$, so $x+y+z = 1/(\alpha \bar\alpha)$. Conversely, given any nonzero $\alpha \in {\bf Q}(\omega)$ we solve these linear equations in $x,y,z\in\bf Q$ to obtain the general solution of $x^3 + y^3 + z^3 - 3xyz = 1$. – Noam D. Elkies Aug 9 at 21:37

show 3 more comments

## 2 Answers

I think this surface has a rational parameterization in terms of (a,b), given by:

$x = (1 + a + a^2)^2/(9 (3 + a (6 + (-1 + a)^2 a)) b^2) + ((-2 + (-2 + a) a) b)/(1 + a + a^2)$

$y = (1 + a + a^2)^2/(9 (3 + a (6 + (-1 + a)^2 a)) b^2) + (b + 2 a b)/(1 + a + a^2)$

$z = (1 + a + a^2)^2/(9 (3 + a (6 + (-1 + a)^2 a)) b^2) + (b - a^2 b)/(1 + a + a^2)$

For rational (a,b), this should give you a dense set of rational points... Let me explain where this parameterization comes from, so it'll be clear that this indeed shows that rational solutions are dense. Consider the (linear, rational) change of variables: $x=p+r$, $y=q+r$, $z=r-p-q$. The equation then simplifies to: $p^2+pq+q^2=1/(9r)$. Since the quadratic form is positive definite, for real points we need $r>0$. Each slice (for fixed $r$) is just an ellipse. If $r$ is the square of a rational number (such $r$ are dense in $\mathbb{R}_{>0}$), we can parameterize all the rational solutions for that slice. Taking the union over real slices (for all $r>0$), we're done. -

How are we supposed to be parsing an expression of the form $u/v/w$? – Qiaochu Yuan Aug 9 at 21:33

2 @QiaochuYuan: This is $$x={\frac { \left( 1+a+{a}^{2} \right) ^{2}}{ 9 \left( 3+a \left( 6+ \left( -1+a \right) ^{2}a \right) \right) {b}^{2}}}+{\frac { \left( -2+ \left( -2+a \right) a \right) b}{1+a+{a}^{2}}}$$ $$y ={\frac { \left( 1+a+{a}^{2} \right) ^{2}}{ 9 \left( 3+a \left( 6+ \left( -1+a \right) ^{2}a \right) \right) {b}^{2}}}+{\frac {b+2\,ab} {1+a+{a}^{2}}}$$ $$z = {\frac { \left( 1+a+{a}^{2} \right) ^{2}}{9 \left( 3+a \left( 6+ \left( -1+a \right) ^{2}a \right) \right) {b}^{2}}}+{\frac {b-{a}^{2 }b}{1+a+{a}^{2}}}$$ – Robert Israel Aug 9 at 21:48

2 It still doesn't prove density of the rational points in the reals. Any cubic surface over Q with a rational point admits a 6:1 rational map from P^2 to it (in particular, a map with Zariski dense image). However, there are cubic surfaces over Q with a rational point, but with the rational points non-dense in the reals. (See Swinnerton-Dyer, "Two special cubic surfaces".) – René Pannekoek Aug 9 at 23:41

1 Nice example, thanks! But I don't think it applies here. Let me explain where this parameterization comes from.
Consider the (linear, rational) change of variables: $x = p+r, y=q+r, z = r-p-q$. The equation then simplifies to: $p^2 + pq + q^2 = 1/(9r)$. Since the quadratic form is positive definite, for real points we need $r>0$. Each slice (for fixed r) is just an ellipse. If $r$ is the square of a rational number (such $r$ are dense in $\mathbb{R}_{>0}$), we can parameterize all the rational solutions for that slice. Taking the union over real slices, we're done. Does this make sense? (I'm running out of space here...) – coma Aug 10 at 1:21

2 I see. It looks like you're considering the pencil of planes through the line $\ell:X+Y+Z=W=0$ on $S$, which cut out the family of conics you're describing. Nice! – René Pannekoek Aug 10 at 2:24

show 7 more comments

The answer is yes, the rational points on your surface lie dense in the real topology. Let's consider the projective surface $S$ over $\mathbb{Q}$ given by $X^3+Y^3+Z^3-3XYZ-W^3=0$. It contains your surface as an open subset, so to answer your question we might as well show that $S(\mathbb{Q})$ is dense in $S(\mathbb{R})$. Observe that $S$ has a singular rational point $P = (1:1:1:0)$. Since $P$ is singular, the intersection of $S$ with a plane $V$ that contains $P$ is a singular cubic $C_V$, with the rational point $P$ on it. It is a well-known and easy fact that on such curves, the rational points are dense (except if $C_V$ consists of three lines conjugate over $\mathbb{Q}$, but there are only finitely many $V$ for which this happens). Well, in order to approximate any real point $R \in S(\mathbb{R})$ to within distance $\epsilon$, we just pick a plane $V_0$ defined over $\mathbb{Q}$ and containing $P$, which we may choose to lie within distance $\epsilon$ of $R$. But then the rational points on $C_{V_0}$, which lie dense in its real locus, also lie within $\epsilon$ of $R$. -
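As a quick sanity check on the parameterization given in the first answer, here is a short symbolic verification (a minimal sketch, assuming sympy is available; the variable names are mine):

```python
# Verify symbolically that the quoted parameterization satisfies
# x^3 + y^3 + z^3 - 3xyz = 1 identically in a and b.
from sympy import symbols, cancel

a, b = symbols('a b')

D = 1 + a + a**2                     # common denominator
E = 3 + a*(6 + (-1 + a)**2 * a)      # expands to a^4 - 2a^3 + a^2 + 6a + 3
r = D**2 / (9 * E * b**2)            # the shared first term (the "r" of the slicing)

x = r + (-2 + (-2 + a)*a) * b / D
y = r + (b + 2*a*b) / D
z = r + (b - a**2 * b) / D

print(cancel(x**3 + y**3 + z**3 - 3*x*y*z))   # prints 1
```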
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359347224235535, "perplexity_flag": "head"}
http://mathoverflow.net/questions/45799/indepence-of-galois-orbits-on-a-product
## Independence of Galois orbits on a product?

Let's take $X$ some fixed variety over a fixed base field $k$, which is assumed to be of characteristic zero for simplicity. Write $\Gamma_k$ for the absolute Galois group of $k$. Given a point $x\in X(\bar{k})$, write $O(x)$ for the $\Gamma_k$-orbit $\{\sigma x:\sigma\in\Gamma_k\}$, and $ord(x)$ for the cardinality of $O(x)$.

The question aims to understand how independent the action of $\Gamma_k$ on different points could be. So consider a pair $(x,y)\in (X\times X)(\bar{k})$. I wonder if the size of $O(x,y)$ in $X\times X$ would be close to the product $ord(x)\times ord(y)$. This seems impossible in general, but what if one considers a sequence and takes a limit? Say $(x_n,y_n)$ is a sequence in $(X\times X)(\bar{k})$ which is generic, in the sense that for any closed subvariety $Y\subsetneq X\times X$ defined over $\bar{k}$, $(x_n,y_n)$ is not in $Y$ for $n$ large. Are there any known results of the form $$\lim_n\dfrac{ord(x_n,y_n)}{ord(x_n)\times ord(y_n)}=1?$$

The motivation for the question can be found in some equidistribution theorems in arithmetic geometry; see e.g. the works of Szpiro, Ullmo, Zhang, Yuan, etc. In their case no special properties are needed on the product structure, and only the genericity of the sequence of points is assumed. They then embed the base field into the complex number field, and compare the averaged Dirac measure with some canonical measure on the complex locus. But I wonder if the product phenomenon mentioned above happens to be true in some cases. Also, as for the independence, does it seem to have some probability-theoretic interpretation? -

It seems like you get equality whenever the stabilizers of $x$ and $y$ in the Galois group have intersection that is sufficiently small. Even if you fix $x$, this should hold for generic $y$ (assuming you're not in a special case like $k$ real closed). – S. Carnahan♦ Nov 12 2010 at 8:01

Unless I misunderstood something: if $X=\mathbb{A}^1_k$, your quotient is just the degree of the extension $k(x_n)\cap k(y_n)$ over $k$. – Chris Wuthrich Nov 12 2010 at 9:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9310373663902283, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=2923237
Physics Forums

## Poincare conserved currents : Energy-momentum and Angular-momentum tensors

Not sure if this is the right place to ask, but this doubt originated when reading on string theory, and so here it goes...

The general canonical energy-momentum tensor (as derived from translation invariance), $T^{\mu\nu}_{C}$, is not symmetric. Also, the general angular momentum conserved current (as derived from Lorentz invariance) consists of two parts - the orbital angular momentum component and the spin angular momentum component: $$j^{\mu\nu\rho} = T^{\mu\nu}_{C} x^\rho - T^{\mu\rho}_{C} x^\nu + S^{\mu\nu\rho}$$ But, taking clues from the above angular momentum expression, we can append a suitable term to the generally non-symmetric canonical energy-momentum tensor and modify it (without breaking its conservation) into a symmetric Belinfante tensor: $$T^{\mu\nu}_{C} \to T^{\mu\nu}_{B}$$ Further, the angular momentum tensor can also be modified to absorb the spin term into its orbital momentum term by rewriting it as $$j^{\mu\nu\rho} = T^{\mu\nu}_{B} x^\rho - T^{\mu\rho}_{B} x^\nu$$ Now, there is no spin operator at all - at least not explicitly. And we haven't really modified the physics at all. What, then, is the spin of the system now? In the light-cone gauge of bosonic strings, the above (original) spin operator is used to calculate the spins of the massless fields (photon, graviton)... So, what would the above disappearance of the spin operator mean? If you say that it's not done away with but is just hidden in the last expression above, what rule do we follow in separating the orbital and spin components of angular momentum?

Quote by crackjack: "Not sure if this is the right place to ask,"

No, this is not the right place! Don't mention string theory in this part of the PF. People in here are too busy, with the LOOPY "quantum gravity" stuff of Rovelli and Smolin, to reply to your question! Anyway, let us do it.

Let us write $$J^{\rho \mu \nu} = L^{\rho \mu \nu} + S^{\rho \mu \nu}$$ where $$L^{\rho\mu\nu} = x^{\mu}T_{C}^{\rho\nu} - x^{\nu}T_{C}^{\rho\mu}$$ and $$S^{\rho\mu\nu} = \frac{\partial \mathcal{L}}{\partial \partial_{\rho}\phi^{r}} (\Sigma^{\mu\nu})^{r}{}_{s}\phi^{s}$$ The object L is called by Belinfante the orbital angular momentum density "tensor" because its space components have the form of a non-relativistic orbital momentum density. Because the object S clearly expresses the transformation of the internal degrees of freedom of the field, it is called (by Belinfante) the "tensor" of internal angular momentum density, or spin density. Notice that $S^{\rho\mu\nu}$ does not have the typical form of a classical angular momentum like $L^{\rho\mu\nu}$. In order to show that it can be written in such a form, Belinfante (and independently Rosenfeld) introduced the so-called symmetric energy-momentum tensor: $$T_{B}^{\mu\nu} = T_{C}^{\mu\nu} + t^{\mu\nu}$$ with $$t^{\mu\nu} = \frac{1}{2} \partial_{\rho}( S^{\mu\nu\rho} + S^{\rho\mu\nu} + S^{\nu\mu\rho})$$ Because of the following properties $$T_{B}^{\mu\nu} = T_{B}^{\nu\mu}$$ $$\partial_{\mu}T_{B}^{\mu\nu} = \partial_{\mu}T_{C}^{\mu\nu} = 0$$ $$P^{\mu} = \int T_{B}^{0\mu} \ d^{3}x = \int T_{C}^{0\mu} \ d^{3}x ,$$ $T_{B}^{\mu\nu}$ appears as an equivalent energy-momentum tensor.
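To fill in the step behind the middle property (an added check, using only the antisymmetry of the spin density in its last two indices, $S^{\rho\mu\nu} = -S^{\rho\nu\mu}$, which follows from $\Sigma^{\mu\nu} = -\Sigma^{\nu\mu}$): under the exchange $\mu \leftrightarrow \rho$, $$S^{\mu\nu\rho} + S^{\rho\mu\nu} + S^{\nu\mu\rho} \;\longrightarrow\; S^{\rho\nu\mu} + S^{\mu\rho\nu} + S^{\nu\rho\mu} \;=\; -\left( S^{\rho\mu\nu} + S^{\mu\nu\rho} + S^{\nu\mu\rho} \right),$$ so the combination inside $t^{\mu\nu}$ is antisymmetric in $\mu\rho$, and hence $\partial_{\mu} t^{\mu\nu} = \frac{1}{2}\,\partial_{\mu}\partial_{\rho}(\cdots) = 0$: the Belinfante improvement term is identically conserved and cannot change $\partial_{\mu}T^{\mu\nu}$ or $P^{\mu}$.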
The corresponding new total angular momentum tensor is defined by $$J_{B}^{\rho\mu\nu} = x^{\mu}T_{B}^{\rho\nu} - x^{\nu}T_{B}^{\rho\mu}$$ This can also be written as $$J_{B}^{\rho\mu\nu} = L^{\rho\mu\nu} + s^{\rho\mu\nu}$$ with $$s^{\rho\mu\nu} = x^{\mu}t^{\rho\nu} - x^{\nu}t^{\rho\mu}$$ having the required form of an angular momentum. Again $$\partial_{\rho}J_{B}^{\rho\mu\nu} = 0$$ $$M^{\mu\nu} = \int J_{B}^{0\mu\nu}\ d^{3}x = \int J^{0\mu\nu} \ d^{3}x$$ and $$S^{\mu\nu} = \int d^{3}x \ s^{0\mu\nu} = \int d^{3}x \ S^{0\mu\nu}$$ Consequently, the tensor $s^{\rho\mu\nu}$ could equivalently be considered as the spin density instead of $S^{\rho\mu\nu}$; it is the angular momentum of a "spin" energy-momentum density $t^{\mu\nu}$, which does not contribute to the total energy-momentum vector: $$\int d^{3}x \ t^{0\mu} = 0.$$ In this way, the canonical energy-momentum tensor would then represent an "orbital" energy-momentum tensor.

Although it does not change the physics ($M^{\mu\nu}$ and $P^{\mu}$ remain the same and generate the same Poincare algebra), the above Belinfante-Rosenfeld method has at least two weaknesses. The first lies in the fact that the total spin and the total orbital angular momentum are, in general, not covariant quantities. Indeed, the quantities $$S^{\mu\nu} = \int d^{3}x \ s^{0\mu\nu}$$ and $$L^{\mu\nu} = \int d^{3}x \ L^{0\mu\nu}$$ are tensorial only if the corresponding densities $s^{\rho\mu\nu}$ and $L^{\rho\mu\nu}$ are conserved. This is, however, the case if and only if $T_{C}^{\mu\nu}$ is symmetric. That is, the Belinfante spin and orbital angular momentum are sensible quantities only in those cases where the entire symmetrization procedure seems to be superfluous! The second weakness of the B-R method is that it leads some people (I believe you are included) to believe (incorrectly) that a field can only possess non-zero spin if $T_{C}^{\mu\nu}$ is non-symmetric.

regards

sam

Thanks Sam, for the detailed explanation!

Quote by samalkhaiat: "The second weakness of the B-R method is that it leads some people (I believe you are included) to believe (incorrectly) that a field can only possess non-zero spin if $T_{C}^{\mu\nu}$ is non-symmetric."

Ya, my confusion is kind of related to this... Why should the orbital and spin angular momentum components be separately covariant?

You may find this interesting:

```Symmetric energy-momentum tensor in Maxwell, Yang-Mills, and Proca theories obtained using only Noether's theorem
Merced Montesinos and Ernesto Flores (Dated: February 1, 2008)
Abstract: The symmetric and gauge-invariant energy-momentum tensors for source-free Maxwell and Yang-Mills theories are obtained by means of translations in spacetime via a systematic implementation of Noether's theorem. For the source-free neutral Proca field, the same procedure yields also the symmetric energy-momentum tensor. In all cases, the key point to get the right expressions for the energy-momentum tensors is the appropriate handling of their equations of motion and the Bianchi identities. It must be stressed that these results are obtained without using Belinfante's symmetrization techniques which are usually employed to this end.
arXiv:hep-th/0602190v1 20 Feb 2006```

The emphasis is mine.

Thanks for the link! That is a neat way of deriving the symmetric stress tensor using the Bianchi identity. But I actually don't find Belinfante's procedure to be too hotch-potch.
In Belinfante's procedure, the spin momentum component can be totally done away with, leaving only the orbital momentum component (in the procedure employed in the above paper, we don't ever arrive at any spin component). But, from what I have read, it is this spin momentum component that is used to determine the (integer) spin in bosonic string theory. This is what is confusing me.

Additional note... The above paper also shows that we could arrive at a 'pure orbital' angular momentum tensor without first going through one which has both orbital and spin components (as with Belinfante's procedure). So then, we don't have any rule to split the 'pure orbital' angular momentum tensor to get 'back' the spin component (as samalkhaiat did above).

These papers, arXiv:0905.4529 by Andrew Randono and Dave Sloan and the follow-up paper arXiv:0906.1385 by Andrew Randono, may help if you are interested in the way spin is encoded in the gravitational field. The first emphasizes some algebraic properties of intrinsic spin which are not usually covered, and shows how these algebraic properties are encoded in the gravitational field (tetrad) at asymptotic infinity. The algebraic properties reveal how the two tensors are separately covariant, as you asked. The second paper uses the example of a spinor in linearized gravity and shows explicitly how the intrinsic spin is buried in the symmetric stress energy tensor. To unbury it, the paper uses a "gravitational Gordon decomposition" which is a direct analogue of the procedure to extract the spin in electromagnetism.

Thanks Ed Rex. I will take a look at them. They seem to be specialized to the case of gravitational fields, but I will see what generic information I can decouple from this.

Montesinos and Flores is the standard way of handling the stress tensor for form-valued fields, rather than canonical + Belinfante. A much more comprehensive treatment of Noether symmetry was already, long before then, covered by Hehl in "Two Lectures on Fermions and Gravity", Hehl et al., in Geometry and Theoretical Physics (J. Debrus, A. C. Hirshfeld (Eds.)), Springer-Verlag 1991; sections 6-12 cover the general issue.

If a Lagrangian 4-form is a function L(q,dq,x,dx) of form-valued fields q, their exterior differentials dq, the coordinates x and coordinate differentials dx, then you can write the total variation as DL = (Dq)^f + (D(dq))^p + (Dx)^K + (D(dx))^T, which applies generally as D ranges over all natural derivations. The case D = L_m (the Lie derivative of the coordinate vector field d/dx^m) yields an expression for K. The case D = i_m (the contraction of d/dx^m) yields an expression for T. Substituting these in eliminates K and T. Then the general case D = L_X + delta, where L_X is the infinitesimal diffeomorphism generated by a vector field X and delta an internal symmetry, yields the weak form of Noether's theorem for this combined symmetry. You can read off the stress tensor, almost directly, from the result. And that's the correct way to handle the problem. Both an on-shell and an off-shell Noether theorem can easily be formulated. The general assumption made is delta(L) = d(L_d) for some 3-form L_d. That generalizes the Noether theorem, which takes L_d = 0.

Hehl framed "relocalization theorems" in sections 10-12. Each one produces a kind of canonical result. One relocalization (mentioned in section 11) is defined as that which zeros out the spin-current. This is one and the same as Belinfante.
The second (mentioned in section 12) also zeros out the Noether current associated with scale symmetry. So you can do a lot more than Belinfante; Belinfante is not the be-all and end-all of relocalization transformations.

The Maxwell field yields a symmetric tensor. The Dirac field (which is also treated in the Hehl article) does not. The electromagnetic field has zero spin tensor. The situation is explained in depth in the Hehl article. But the best way to understand it, here, is that the photon does *not* have spin. It has helicity (and the helicity is an invariant). The same would be true for Weyl fermions.

The best way to understand this -- and the right way to understand and classify the representations of space-time symmetry groups -- is to work out the coadjoint orbits of the Poincare' group (i.e. write down the symplectic decomposition of the Poisson manifold associated with the Poincare' group). The symplectic leaves associated with luxons (light-speed representations) fall into 3 families: (1) the spin 0 family, (2) the "helion" family and (3) the "continuous spin" family. The symplectic leaf which photons (and Weyl particles) belong to is what I called the "helion" family. It has a Pauli-Lubanski vector W that is proportional to the momentum vector P. The symplectic leaf is only *6* dimensional (i.e. it has only *3* Heisenberg pairs, not 4). The helions are more akin to spin 0 luxons and spin 0 tardions (both of which have 6-dimensional symplectic leaves) than they are to spin non-zero representations. (Tardion = "slower than light" representation.)

In contrast, the subfamily of luxons which have what may more properly be called an extra spin degree of freedom reside on the symplectic leaves that are *8* dimensional; these are what are also known as the "continuous spin" luxons. There, the spatial part of the Pauli-Lubanski vector W resides on a cylinder, the width of the cylinder being an invariant. They're never considered as models for any of the known fundamental fields. (For spin non-zero tardions, W resides on a sphere, the radius essentially giving you the spin; the Dirac fermions being the prime example.) For the class (3) luxons, too, there is no real spin-orbit decomposition, since W isn't on a sphere. They have no position operator, to begin with (a well-known impossibility theorem). But the class is more closely analogous to the spin non-zero particles. Class (2) luxons sorta have a position operator, because you can apply the Darboux theorem to write down the symplectic form as omega = dp_x ^ dx + dp_y ^ dy + dp_z ^ dz. But the coordinates are singular in the same way that the (phi, theta) coordinates of a sphere are. The symplectic representation for the helion class, in fact, is similar to that for the magnetic monopole (something that's developed in LNP 188, "Gauge Symmetries and Fibre Bundles"). In fact, the construction used in LNP 188 yields an extra angular momentum term, which is analogous to how angular momentum arises for the helion sector out of its helicity. But the angular momentum does *not* come from spin.

Quote by samalkhaiat: "No, this is not the right place! Don't mention string theory in this part of the PF. People in here are too busy, with the LOOPY "quantum gravity" stuff of Rovelli and Smolin, to reply to your question!"

I can't see how you came to this conclusion.
There are a couple of interesting discussions regarding string theory - and there are members participating in the "beyond" forum able to respond at expert level to string theory questions.

Quote by Federation 2005: "A much more comprehensive treatment of Noether symmetry was already, long before then, covered by Hehl in "Two Lectures on Fermions and Gravity", Hehl et al., in Geometry and Theoretical Physics (J. Debrus, A. C. Hirshfeld (Eds.)), Springer-Verlag 1991; sections 6-12 cover the general issue. ... But the angular momentum does *not* come from spin."

Thanks. I understood your post only in patches - will look at the lectures of Hehl et al. to fill in my ignorance.
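For reference, since it is used repeatedly above but never written out: the Pauli-Lubanski vector is (standard definition, added here for convenience) $$W^{\mu} = \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}P_{\nu}M_{\rho\sigma},$$ and the "helion" (helicity) representations discussed above are exactly the massless ones with $W^{\mu} = \lambda P^{\mu}$, the invariant proportionality constant $\lambda$ being the helicity.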
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9039138555526733, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/56839/how-to-analyze-applied-forces-torques-of-system-with-multiple-massless-frictionl
# How to Analyze Applied Forces/Torques of a System With Multiple Massless/Frictionless Pulleys of Different Radii

I have been reviewing the basics of mechanics in preparation for studying Spivak's textbook on mechanics (I am from a more mathematical background, and I am taking an advanced analytical mechanics course next quarter which is using this text). I recently came across a problem in a text different from the one that I am using (University Physics), and I was having difficulty understanding how to approach it, and indeed even understanding the details of the situation (the problem is stated with little detail). Unfortunately, my text does not seem to cover anything like it, in the exercises or otherwise.

Imagine a flat surface inclined at 45 degrees. A 1500 kg block is placed on the incline, and it is (according to the figure) attached directly to a pulley of radius $r$. The pulley itself has a cable rolling over it which extends along the incline, with one end connected to the surface and the other end connected to a larger pulley of radius $3r$, but at a point of contact which is only $r$ from the center. Another block of unknown mass $m$ is attached to the larger pulley at the edge ($3r$ from the center) and is allowed to hang. We are asked to compute the mass $m$ of the hanging block required to counteract the motion of the 1500 kg block. (The physical situation [I will try to update this with a picture as soon as I can] is basically just like the traditional one-pulley/two-block system on an incline first encountered in problems involving Newton's laws, except that here there are two pulleys involved, as described above.)

To be perfectly honest, I am really not at all sure what is going on in this problem. If we take the axis of rotation to be the center of the larger pulley, then it is clear the second block imparts a torque of $3rmg\sin\left(\frac{\pi}{4}\right)$ and the tension in the cable is just $mg$. It seems that the block on the incline imparts no torque, since its action is directed parallel to the radial coordinate of the axis (if we are taking the natural coordinate system with axes perpendicular and parallel to the axis of rotation). What is the significance of the smaller pulley to which the mass is attached? -
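Since the figure is unavailable, here is only the generic balance principle for a rigid compound pulley, as a hedged sketch (the numbers and geometry below are placeholders, not this problem's actual configuration): for two cables pulling tangentially at radii $r_1$ and $r_2$ about the same frictionless axle, static equilibrium of the pulley requires $T_1 r_1 = T_2 r_2$.

```python
import math

# Torque balance for a rigid compound pulley: tangential tensions T1, T2
# acting at radii r1, r2 about one frictionless axle satisfy T1*r1 = T2*r2.
def balancing_tension(T1: float, r1: float, r2: float) -> float:
    """Tension at radius r2 needed to balance tension T1 acting at radius r1."""
    return T1 * r1 / r2

# Placeholder numbers only: a 1500 kg block on a frictionless 45-degree
# incline pulls along the slope with T1 = M*g*sin(45 degrees).
g, M = 9.81, 1500.0
T1 = M * g * math.sin(math.pi / 4)
print(balancing_tension(T1, 1.0, 3.0))  # tension required at radius 3r, taking r = 1
```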
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9576636552810669, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/3283/why-do-i-always-get-1-when-i-keep-hitting-the-square-root-button-on-my-calculat/3297
# "Why do I always get 1 when I keep hitting the square root button on my calculator?"

I asked myself this question when I was a young boy playing around with the calculator. Today, I think I know the answer, but I'm not sure whether I'd be able to explain it to a child or layman playing around with a calculator. Hence I'm interested in answers suitable for a person that, say, knows what the square root of a square number is, but doesn't know about sequences, functions, convergence and the like. -

5 If you think this question should be CW, vote up this comment and I'll change it. – Rasmus Aug 25 '10 at 16:02

1 The technical way of putting it is that 1 is a "fixed point" of the square root function. – J. M. Aug 25 '10 at 16:08

6 @J. Mangaldan: it is not enough to say that 1 is a fixed point; 1 also needs to be attractive rather than repellent. – Qiaochu Yuan Aug 25 '10 at 16:14

2 (For example, 0 is also a fixed point, but on [0, infty) it is repellent.) – Qiaochu Yuan Aug 25 '10 at 16:15

1 You need to add that it's the unique (attractive) fixed point on (0, Infinity) and that (0, Infinity) lies within (actually is) its domain of attraction. – whuber Aug 25 '10 at 19:23

show 3 more comments

## 7 Answers

Here is how I had justified it to myself when I was a kid (I was convinced :-)) If $x > 1$ then $\sqrt x > 1$ and $x > \sqrt x$. So we keep reducing the number while still staying $> 1$. Also, we cannot end up at a number $>1$, as then taking the square root would reduce it. If we do end up at $1$, we stay there. Since the calculator has a limited precision, we end up at $1$, and pretty quickly. Of course, the assumption here is that we do end up somewhere :-) (Which seemed justified by the fact that the first non-zero digit after the decimal point always seemed to decrease.) -

So, how did you manage to convince yourself that taking square roots of fractions repeatedly has the same behavior? (Just kidding, you replace > with < of course). – J. M. Aug 25 '10 at 16:24

1 @J. Mangaldan: By considering $\frac{1}{x}$. – Aryabhata Aug 25 '10 at 16:30

You were a pretty smart kid! Up to the statement that any decreasing sequence bounded below has a limit, that is a perfectly good argument. And that statement more or less comes down to how you define real numbers in the first place... – David Speyer Aug 25 '10 at 18:18

@David: Thanks! Unfortunately, there has only been a decline :-( :-) You are right, it can be made rigorous. I suppose it comes from the axiom of existence of a supremum for subsets of R which are bounded above (or infimum). – Aryabhata Aug 25 '10 at 18:38

Make a picture of the usual spiral converging to the fixed point, in the style of a cobweb plot. (And where does this little obsession with 'layman' explanations, apparent on this site, come from? :P)

Later. To make the picture it is better to use the cosine (so you do not even have to make the picture, because that is what's in the Wikipedia page), mainly because the iteration is seen more clearly: for the square root, the sequence converges too fast and too boringly to be interesting.

PS. You can draw this kind of picture, assuming you have access to Mathematica, with the following code:

````(* Cobweb plot for the fixed-point iteration of Cos: iterate from 0.1
   and draw the staircase between the graphs of y = f[t] and y = t. *)
f[t_] := Cos[t];
g = Plot[{f[t], t}, {t, -\[Pi]/2, \[Pi]/2}, AspectRatio -> 1,
  PlotStyle -> {Thick},
  Epilog -> Module[{pts = NestList[f, .1, 10]},
    {Thin, PointSize[0.015],
     Line@Flatten[Map[{#, {#[[2]], #[[2]]}} &, Partition[pts, 2, 1], 1], 1],
     Red, Point[Partition[pts, 2, 1]]}]]````
-

2 There's a picture... therefore, +1. – J. M.
Aug 25 '10 at 16:15

1 I was even thinking about creating a layman tag. :) – Rasmus Aug 25 '10 at 16:19

Admittedly, the question about turning your clothes right-side-out was way wittier. – Rasmus Aug 25 '10 at 16:32

4 Notice that this picture is not of the correct function. The question was about square root, not cosine! – David Speyer Aug 25 '10 at 18:19

3 @David, sure (and hence the "in the style of"), but the one for the square root is too boring, and the cosine also has a key in most calculators! – Mariano Suárez-Alvarez♦ Aug 25 '10 at 18:21

If they know about logarithms, try this:
• taking repeated square roots of a positive real number is the same as repeatedly dividing its logarithm by 2,
• repeatedly dividing something by 2 gets you to 0, and
• the exponential of 0 is 1. -

7 ... and wave hands about continuity :) – Mariano Suárez-Alvarez♦ Aug 25 '10 at 16:30

3 Sure. But it is easy to convince someone with a calculator that the exponential of something close to 0 is close to 1. – Qiaochu Yuan Aug 25 '10 at 16:40

I'd have said raising something to 0 is the same as dividing a number by itself... but that works too. – J. M. Aug 25 '10 at 16:42

@Mariano: but that's certainly far less hand-waving than your "proof by picture". All that hand-waving - especially combined with the spiraling - is RSI-exacerbating merely to think about! – Gone Aug 25 '10 at 21:41

Dualizing reduces it to: repeatedly squaring $1+\epsilon$ goes to $\infty$. But that's clear by the Binomial Theorem. Namely: given any positive $\epsilon$, we can force $2^n\epsilon > x$ by choosing $n$ large enough. Therefore we have $(1+\epsilon)^{2^n} \ge 1+2^n \epsilon > x \ge 1$ by the Binomial Theorem. So, taking $2^n$-th roots: $1+\epsilon \ge x^{1/2^n} \ge 1$. Hence $x^{1/2^n}\to 1$ as $n\to\infty$. -

Nice! You don't even need the Binomial Theorem; you only need to demonstrate that $(1+e)^2 > 1 + 2e$, which is obvious. – whuber Aug 25 '10 at 20:26

Indeed, one could trade the Binomial Theorem for iterating that inequality. But probably anyone capable of understanding this already knows the Binomial Theorem. – Gone Aug 25 '10 at 20:43

A nice series of solutions has already been given. It's hard to improve on any of them. But because this is an elementary question, it deserves an elementary answer. (Although "elementary" is subjective, being able to punch buttons on a calculator requires little mathematical sophistication, so we should attempt to minimize the formality and maximize the intuition in the explanation.) When you begin with a value x on the calculator with x > 1, note that its square root lies between 1 and the midpoint of the interval [1, x]. (Proof: The midpoint is (x+1)/2. Because ((x+1)/2)^2 - x = ((x-1)/2)^2 >= 0, the root of x is no greater than the midpoint. Non-algebraic version: draw the graphs of the relations x = y^2 and y = (1+x)/2 on the same plot and note that the latter never falls below the former.) It follows immediately (by induction if you want to be formal) that the iterated root, although never falling below 1, decreases faster than the iterated average with 1, which obviously (on geometric or arithmetic grounds) converges to 1. The case of 0 < x < 1 reduces to the case 1/x > 1 because Sqrt(1/x) = 1/Sqrt(x). -

I think a better question is to analyze how the roots of a number $X$ approach 1, depending on $X$: $X^a \approx 1 + a\ln X$ for small $a$. Not only does this introduce the log, it gives a natural derivation of $e$ which is easy to remember.
I never understood what was "natural" about the natural log until I was an undergraduate, and I think the real value of this sequence is the way it leads so memorably and naturally to the derivation of e. It's easy to see why you arrive at one from repeatedly taking square roots (as Moron answered), since the sequence is strictly decreasing and bounded below by 1. I think a motivated person of almost any age can understand that, as long as it's explained slowly and clearly. -

````if x == 0:      sqrt(x) == 0
if 0 < x < 1:   x < sqrt(x) < 1
if x == 1:      sqrt(x) == 1
if x > 1:       1 < sqrt(x) < x````

Except for the cases `x == 0` and `x == 1`, `x != sqrt(x)`, so it must move, and it must move towards 1, so it will approach 1 to any accuracy you choose. -

1 There's a logical gap here. It's consistent with all your assertions, for example, for iterated square roots of x > 2 to converge down to 2 and for iterated roots of x, 1 < x <= 2, to converge down to 1. At a minimum you need to invoke continuity. – whuber Aug 26 '10 at 1:01

If you add something about how far towards 1 it moves, that would solve the problem, but I'm too lazy to figure out how to add that. – BCS Aug 26 '10 at 1:09
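As a quick numerical companion to the answers above (a minimal Python sketch added here; the starting values and iteration count are arbitrary choices), one can watch the iteration converge to 1 from either side:

```python
# Repeatedly press the "square root button" and watch the value approach 1,
# whether we start above or below 1. At double precision, 60 iterations
# already lands exactly on 1.0.
import math

for x0 in (1500.0, 0.0004):
    x = x0
    for _ in range(60):
        x = math.sqrt(x)
    print(x0, "->", x)   # both lines print "... -> 1.0"
```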
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440147280693054, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/12/18/real-valued-functions-of-a-single-real-variable/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician ## Real-Valued Functions of a Single Real Variable At long last we can really start getting into one of the most basic kinds of functions: those which take a real number in and spit a real number out. Quite a lot of mathematics is based on a good understanding of how to take these functions apart and put them together in different ways — to analyze them. And so we have the topic of "real analysis". At our disposal we have a toolbox with various methods for calculating and dealing with these sorts of functions, which we call "calculus". Really, all calculus is is a collection of techniques for understanding what makes these functions tick. Sitting behind everything else, we have the real number system $\mathbb{R}$ — the unique ordered topological field which is big enough to contain limits of all Cauchy sequences (so it's a complete uniform space) and least upper bounds for all nonempty subsets which have any upper bounds at all (so the order is Dedekind complete), and yet small enough to exclude infinitesimals and infinites (so it's Archimedean). Because the properties that make the real numbers do their thing are all wrapped up in the topology, it's no surprise that we're really interested in continuous functions, and we have quite a lot of them. At the most basic, the constant function $f(x)=1$ for all real numbers $x$ is continuous, as is the identity function $f(x)=x$. We also have ways of combining continuous functions, many of which are essentially inherited from the field structure on $\mathbb{R}$. We can add and multiply functions just by adding and multiplying their values, and we can multiply a function by a real number too. • $\left[f+g\right](x)=f(x)+g(x)$ • $\left[fg\right](x)=f(x)g(x)$ • $\left[cf\right](x)=cf(x)$ Since all the nice properties of these algebraic constructions carry over from $\mathbb{R}$, this makes the collection of continuous functions into an algebra over the field of real numbers. We get additive inverses as usual in a module by multiplying by $-1$, so we have an $\mathbb{R}$-module using addition and scalar multiplication. We have a bilinear multiplication because of the distributive law holding in the ring $\mathbb{R}$ where our functions take their values. We also have a unit for multiplication — the constant function $1$ — and a commutative law for multiplication. I'll leave you to verify that all these operations give back continuous functions when we start with continuous functions. What we don't have is division. Multiplicative inverses are tough because we can't invert any function which takes the value zero anywhere. Even the reciprocal of the identity function, $f(x)=\frac{1}{x}$, is very much not continuous at $x=0$. In fact, it's not even defined there! So how can we deal with this? Well, the answer is sitting right there. The function $\frac{1}{x}$ is not continuous at that point. We have two definitions (by neighborhood systems and by nets) of what it means for a function between two topological spaces to be continuous at one point or another, and we said a function is continuous if it's continuous at every point in its domain. So we can throw out some points and restrict our attention to a subspace where the function is continuous. Here, for instance, we can define a function $f:\mathbb{R}\setminus\{0\}\rightarrow\mathbb{R}$ by $f(x)=\frac{1}{x}$, and this function is continuous at each point in its domain.
So what we should really be considering is this: for each subspace $X\subseteq\mathbb{R}$ we have a collection $C^0(X)$ of those real-valued functions which are continuous on $X$. Each of these is a commutative $\mathbb{R}$-algebra, just like we saw for the collection of functions continuous on all of $\mathbb{R}$. But we may come up with two functions over different domains that we want to work with. How do we deal with them together? Well, let's say we have a function $f\in C^0(X)$ and another one $g\in C^0(Y)$, where $Y\subseteq X$. We may not be able to work with $g$ at the points in $X$ that aren't in $Y$, but we can certainly work with $f$ at just those points of $X$ that happen to be in $Y$. That is, we can restrict the function $f$ to the function $f|_Y$. It's the exact same function, except it's only defined on $Y$ instead of all of $X$. This gives us a homomorphism of $\mathbb{R}$-algebras $\underline{\hphantom{X}}|_Y:C^0(X)\rightarrow C^0(Y)$. (If you've been reading along for a while, how would a category theorist say this?) As an example, we have the identity function $f(x)=x$ in $C^0(\mathbb{R})$ and the reciprocal function $g(x)=\frac{1}{x}$ in $C^0(\mathbb{R}\setminus\{0\})$. We can restrict the identity function by forgetting that it has a value at ${0}$ to get another function $f|_{\mathbb{R}\setminus\{0\}}$, which we will also denote by $x$. Then we can multiply $f|_{\mathbb{R}\setminus\{0\}}g$ to get the function $1\in C^0(\mathbb{R}\setminus\{0\})$. Notice that the resulting function we get is not the constant function on $\mathbb{R}$ because it's not defined at ${0}$. Now as far as language goes, we usually drop all mention of domains and assume by default that the domain is "wherever the function makes sense". That is, whenever we see $\frac{1}{x}$ we automatically restrict to nonzero real numbers, and whenever we combine two functions on different domains we automatically restrict to the intersection of their domains, all without explicit comment. We do have to be a bit careful here, though, because when we see $\frac{x}{x}$, we also restrict to nonzero real numbers. This is not the constant function $1:\mathbb{R}\rightarrow\mathbb{R}$ because as it stands it's not defined for $x=0$. Clearly, this is a little nutty and pedantic, so tomorrow we'll come back and see how to cope with it. Posted by John Armstrong | Analysis, Calculus
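The "restrict to the intersection of domains" convention in the post is easy to model in code. Here is a minimal Python sketch (the class name `PartialFn` is mine, purely illustrative): multiplying two partial functions intersects their domains, so $x\cdot\frac{1}{x}$ comes out as the constant $1$ on $\mathbb{R}\setminus\{0\}$, not on all of $\mathbb{R}$.

```python
# A toy model of elements of C^0(X) as (rule, domain predicate) pairs.
class PartialFn:
    def __init__(self, rule, domain):
        self.rule = rule          # x -> value
        self.domain = domain      # x -> bool: "is x in the domain?"

    def __call__(self, x):
        if not self.domain(x):
            raise ValueError(f"{x} is outside the domain")
        return self.rule(x)

    def __mul__(self, other):
        # pointwise product, defined only on the intersection of domains
        return PartialFn(lambda x: self.rule(x) * other.rule(x),
                         lambda x: self.domain(x) and other.domain(x))

identity   = PartialFn(lambda x: x,     lambda x: True)     # x on all of R
reciprocal = PartialFn(lambda x: 1 / x, lambda x: x != 0)   # 1/x on R \ {0}

product = identity * reciprocal   # the function x * (1/x)
print(product(2.0))               # 1.0
# product(0.0) raises: the product is 1 on R \ {0}, NOT the constant 1 on R
```

The failure at `0.0` is exactly the post's point about $\frac{x}{x}$ not being the constant function on $\mathbb{R}$.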
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 53, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313095808029175, "perplexity_flag": "head"}
http://mathoverflow.net/questions/98757/schur-functors-generalization-to-jack-hall-littlewood-macdonald-functors/98761
## Schur functors generalization to "Jack", "Hall-Littlewood", "Macdonald" functors ? Schur functors are functors from the category of vector spaces to itself. If we take an operator $M\colon V\to V$ and apply a Schur functor to it and then calculate the trace $Tr(M^{\Lambda})$, we will get the Schur polynomial in the eigenvalues of $M$. Question Can one generalize (deform) Schur functors, such that $Tr(M^{\Lambda})$ will give polynomials which generalize (deform) Schur polynomials, e.g. the Hall-Littlewood polynomials, or Jack polynomials, and most generally Macdonald polynomials? - 1 Have you seen arxiv.org/abs/q-alg/9503012 ? – Gjergji Zaimi Jun 4 at 9:05 @Gjergji Thank you, I know; maybe I am forgetting something now, but it seems to me it is not the answer. Why do I need an intertwiner? Is it natural? It does not seem so to me. Moreover you will need to take a very specific representation to obtain the Calogero model (which corresponds to Jack polynomials, respectively in the q-case to Macdonald polynomials)... – Alexander Chervov Jun 4 at 14:14 ## 1 Answer It seems to me that this is answered, perhaps in a boring way, by Haiman's work on the $n!$-conjecture (now a theorem due to Haiman). For any partition $\lambda$, Haiman constructs a finite dimensional graded module $C_{\lambda}$ for $\mathbb{C}[x,y][S_n]$. (The group elements commute with $x$ and $y$, and $x$ and $y$ commute with each other.) The doubly graded Frobenius character of $C_{\lambda}$ is the $\lambda$-Macdonald polynomial. Now just use Schur-Weyl duality: Define the functor $F_{\lambda}$ from vector spaces to vector spaces by $$V \mapsto V^{\otimes |\lambda|} \otimes_{\mathbb{C}[S_n]} C_{\lambda}.$$ The result is a doubly graded $\mathbb{C}[x,y]$ module which is the sum of Schur functors corresponding to the Macdonald polynomial. - @David thank you very much! It is unexpected and interesting for me! (What is a "Frobenius character"?) Actually I hoped for something like a deformation, e.g. the vector spaces stay the same but the morphisms are somehow deformed. Do you think it is possible to have something like this? Morally this should correspond to quantum groups - take V as the basic representation of U_q(gl). Send "V" to the irrep of U_q(gl) which corresponds to the Young diagram Lambda. But it is not clear to me how to make a functor from this... – Alexander Chervov Jun 4 at 14:23 The "Frobenius character" is the standard map which sends an $S_n$-representation to a symmetric polynomial. The funny thing is that it is NOT a character -- in particular, the Frobenius character of a tensor product is not the product of the Frobenius characters. The relationship goes through Schur-Weyl duality: If $M$ is an $S_n$-rep with Frob. character $f$, then $V^{\otimes n} \otimes_{k[S_n]} M$, as a $GL(V)$-rep, has character $f$. – David Speyer Jun 4 at 15:10 @David is it the same thing which is called the "characteristic map"? arxiv.org/abs/1112.0620 Characteristic maps for the Brauer algebra A. I. Molev, N. Rozhkovskaya – Alexander Chervov Jun 4 at 16:18
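As a concrete check of the undeformed statement in the question, that $Tr(M^{\Lambda})$ gives the Schur polynomial in the eigenvalues of $M$, here is a small numerical sketch using the classical bialternant formula $s_\lambda = \det(x_i^{\lambda_j+n-j})/\det(x_i^{n-j})$. The helper name `schur` is mine; this covers only the classical case, not the Jack/Macdonald deformations the question asks about.

```python
import numpy as np

def schur(lam, xs):
    """Schur polynomial s_lam at the values xs, via the bialternant formula
    s_lam = det(x_i^(lam_j + n - j)) / det(x_i^(n - j))."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))      # pad the partition to length n
    num = np.array([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = np.array([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return np.linalg.det(num) / np.linalg.det(den)

# Tr(M^Lambda) for Lambda = (2, 1) applied to a matrix with distinct eigenvalues:
M = np.diag([1.0, 2.0, 3.0])
print(schur((2, 1), np.linalg.eigvals(M)))   # s_{(2,1)}(1,2,3) = 60
```

The value 60 agrees with $(x_1+x_2)(x_1+x_3)(x_2+x_3)$ at $(1,2,3)$, which is $s_{(2,1)}$ in three variables.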
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8999674916267395, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/138692/two-opposite-events-fill-whole-probability-event-space-a-process-selects-them?answertab=votes
# Two opposite events fill whole probability event space, a process selects them [duplicate] Possible Duplicate: Probability of two opposite events Suppose there is a string of eight bits, e.g.: 00100110 Bits are randomly chosen from the string. The location of a bit (in the string) does not influence its selection probability. Probability of choosing $0$: $p_0 = \frac{5}{8} = 0.625$ Prob. of choosing $1$: $p_1 = \frac{3}{8} = 0.375$ Suppose there is an ongoing process of selecting $0$ and $1$. So at each moment, $0$ or $1$ is selected, and represents the current state C of the process. The probability of choosing the opposite state (to the current state C), and then again the opposite state – such a composite event is called a cycle – is given by: $$p_\text{cycle} = p_0 \cdot p_1 \space\space\space (1)$$ Question: define $p_a = p_{cycle}$. Then the opposite event has $p_b = 1 - p_{cycle}$. We again have two opposite events. What will $p_b$ look like? I.e., which sequences of $0$ and $1$ will belong to events of kind A and which to events of kind B? I have a problem defining the B-set. - 1 Looks okay to me, assuming that the choices are independent. – Brian M. Scott Apr 30 '12 at 2:10 If you're not happy with the answers you got to the earlier version of this question, don't go posting a new one - just edit the old one. – Gerry Myerson Apr 30 '12 at 2:41 @BrianM.Scott, Gerry: I have modified the question. It is now clearly different. – Mooncer Apr 30 '12 at 4:35 ## 1 Answer This is an answer to the updated question. Let $C$ be the current state, either $0$ or $1$. Let $\overline C$ be the opposite state, either $1$ or $0$, respectively. Let $E$ be the event not-cycle. Then $E$ occurs when the next choice is $C$, or when the next choice is $\overline C$ and the choice after that is also $\overline C$. If $C=0$, $E$ occurs if the next choice is $0$, or if the next two choices are both $1$; the probability of this is $p_0+p_1^2$. If $C=1$, $E$ occurs if the next choice is $1$, or if the next two choices are both $0$; the probability of this is $p_1+p_0^2$. Now the probability of being in state $0$ at any given time is $p_0$, and the probability of being in state $1$ is $p_1$. Thus, the probability of being in state $0$ and having $E$ occur is $p_0(p_0+p_1^2)$, and the probability of being in state $1$ and having $E$ occur is $p_1(p_1+p_0^2)$. Combining the two, we find that the probability of $E$ is $$\begin{align*}p_0(p_0+p_1^2)+p_1(p_1+p_0^2)&=p_0^2+p_0p_1^2+p_1^2+p_1p_0^2\\ &=p_0^2+p_0p_1(p_0+p_1)+p_1^2\\ &=p_0^2+p_0p_1+p_1^2\;, \end{align*}$$ since $p_0+p_1=1$. And this agrees with your calculation of $p_0p_1$ as the probability of a cycle, since $$(p_0^2+p_0p_1+p_1^2)+p_0p_1=p_0^2+2p_0p_1+p_1^2=(p_0+p_1)^2=1\;:$$ the probabilities of cycle and not-cycle must add up to $1$. - I will use the regex-like symbol "+" to denote "one or more". I think that $p_1p_0$ means (for $C=0$) "wait for 1 (i.e. $\overline C$), then wait for 0", and matches a sequence of randomly chosen bits $0^+1^+0$. For $\overline C$ it is $1^+0^+1$. Based on your answer, the not-cycle is: $0^+$, $0^+1^+$, $1^+$, $1^+0^+$? – Mooncer Apr 30 '12 at 16:11 @Steffen: No, $p_1p_0$ is the probability that the next two choices are $1$ and $0$ in that order; no waiting is involved. – Brian M.
Scott Apr 30 '12 at 16:13 If a process selected $0$ and $1$ with probabilities $p_0, p_1$, and we observed its output string, then the event "1 observed" would mean that we wait – possibly observing $0$? And $1$ would be observed ("inside" the zeroes) with prob. $p_1$? – Mooncer Apr 30 '12 at 17:06 @Steffen: All of the calculations above, both yours and mine, refer to observing the very next output or the next two outputs; no waiting is involved. – Brian M. Scott Apr 30 '12 at 17:09
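A short sanity check of the algebra above (my own sketch, with $p_0 = 5/8$ and $p_1 = 3/8$ from the question): the cycle and not-cycle probabilities sum to 1, and a brute-force enumeration of the current state plus the next two independent draws reproduces the answer's expression.

```python
from itertools import product

p0, p1 = 5/8, 3/8     # probabilities of drawing 0 and 1 from '00100110'

p_cycle = p0 * p1                    # formula (1) in the question
p_not   = p0**2 + p0*p1 + p1**2      # the answer's expression for not-cycle
print(p_cycle + p_not)               # 1.0: the two events are complementary

# Brute force over current state c and the next two independent draws a, b.
total = 0.0
for c, a, b in product((0, 1), repeat=3):
    pr = (p0, p1)[c] * (p0, p1)[a] * (p0, p1)[b]
    is_cycle = (a != c) and (b == c)   # opposite state, then back again
    if not is_cycle:
        total += pr
print(total)                          # matches p_not
```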
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9526413083076477, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/contact-geometry
## Tagged Questions 1answer 118 views ### the existence of (almost) contact (metric) structure I am trying to understand some basic facts about (almost) contact (metric) structure, especially on 3-manifolds 5-manifolds. (1) I saw statement that "any compact oriented 3-manif … 0answers 60 views ### Behavior of Reeb vector field (or almost contact 1-form), and “Contact instanton” I am having trouble "visualizing" the behavior of the two objects mentioned in title. And the question I'm raising might be vague in some sense (or too specific). Also I am hope fo … 1answer 118 views ### Normalized Hamiltonian holomorphic vector fields on Sasakian manifolds Hello, I am reading the paper Futaki; Ono; Wang Transverse Kähler geometry of Sasaki manifolds and toric Sasaki-Einstein manifolds. J. Differential Geom. 83 (2009), no. 3, 585–635. … 1answer 399 views ### What is knot contact homology? Recently, it was conjectured by the paper of Aganagic and Vafa that the $Q$-deformed $A$-polynomials can be identified with the augmentation polynomials of the knot contact homolog … 4answers 431 views ### When does a hypersurface have contact-type? In a symplectic manifold $(X^{2n},\omega)$, a hypersurface $Y\subset X$ has contact-type if there is a contact form $\lambda$ such that $d\lambda=\omega|_Y$. Recall that a contact … 0answers 138 views ### From convex geometry to contact topology Here is a problem in contact topology that was suggested by Petya's answer to this mathoverflow question of mine. Let $S^* \mathbb{R}^n$ be the space of cooriented contact element … 1answer 509 views ### ‘Contactization’ and Symplectization Given a contact manifold $(M,\lambda)$ we can pass to the symplectization $(\mathbb{R}\times M,\omega=d(e^s\lambda))$ and this is great to bring the machinery of symplectic geometr … 3answers 503 views ### Thom polynomial for contact algebraic structures Let's consider a algebraic contact structure $P$ on $\mathbb CP^3$ and a algebraic curve $C$ degree $d$ and genus $g$. Let's assume that contact structure has degree $p$ (see http: … 2answers 342 views ### strong contactomorphism group inside contactomorphism group Let $(M, \xi)$ be a closed contact manifold with co-oriented contact structure $\xi = \ker \alpha$. Let $\mathrm{Cont}(M, \alpha)$ be the group of diffeomorphisms that preserve th … 1answer 275 views ### Osculating spaces and distributions on (real) Grassmannian manifold Hello! Recenlty, doing my research, I came across a quite natural construction, and I would like to know more about it. Unfortunately, being not expert neither in Grassmannians nor … 1answer 132 views ### Why is tb < 0 for boundary of a convex surface? Why is $tb(K)$ (Thurston-Bennequin invariant) of a Legendrian knot $K$ which is the boundary of a convex surface $\Sigma$ is negative in a contact 3 manifold? 4answers 971 views ### What is the role of contact geometry in the hamiltonian mechanics? Let us assume someone is interested in the study of Hamiltonian mechanics. What are good examples to illustrate him of the usefulness of contact geometry in this context? On one h … 1answer 267 views ### “Rounding the corners” to get contact boundary Suppose we have symplectic manifolds $(M_1, \omega_1)$ and $(M_2, \omega_2)$ with non-empty boundary of contact . Often we need to deal with the product $M_1 \times M_2$ with the p …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9080015420913696, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/136179-more-exponential-growth.html
# Thread: 1. ## more exponential growth! ok, so, the population of a city is growing exponentially with the function P(t)=P0e^kt the population doubled in the first 30 years. a) Firstly, find k. My answer was 0.023 but I believe that value is incorrect as I wasn't sure whether to do 2P0 (as the population doubled) or leave P0 as it is when working it out. b.) By what factor will the original population have grown in the initial 120 years c) When will the original population size have grown by a factor of 10? how do I do factors? cheers 2. Originally Posted by meesukj ok, so, the population of a city is growing exponentially with the function P(t)=P0e^kt the population doubled in the first 30 years. a) Firstly, find k. My answer was 0.023 but I believe that value is incorrect as I wasn't sure whether to do 2P0 (as the population doubled) or leave P0 as it is when working it out. b.) By what factor will the original population have grown in the initial 120 years c) When will the original population size have grown by a factor of 10? how do I do factors? cheers a) $P(t) = P_0e^{kt}$. You also know that in 30 years, the population doubled. So $P(30) = 2P(0)$ $P_0e^{30k} = 2P_0e^{0k}$ $P_0e^{30k} = 2P_0$ $e^{30k} = 2$ $30k = \ln{2}$ $k = \frac{\ln{2}}{30}$. Therefore $P(t) = P_0 e^{\frac{t\ln{2}}{30}}$. b) In 120 years: $P(120) = P_0 e^{\frac{120\ln{2}}{30}}$ $= P_0 e^{4\ln{2}}$ $= P_0 e^{\ln{2^{4}}}$ $= 2^{4}P_0 = 16P_0$. So the initial population has grown by a factor of $2^{4} = 16$ (note that $120/30 = 4$ doubling periods). c) To have grown by a factor of 10: $P_0e^{\frac{t\ln{2}}{30}} = 10P_0$ $e^{\frac{t\ln{2}}{30}} = 10$ $\frac{t\ln{2}}{30} = \ln{10}$ $t\ln{2} = 30\ln{10}$ $t = \frac{30\ln{10}}{\ln{2}}$.
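A quick numeric check of the solution (my own sketch, assuming the stated model $P(t) = P_0 e^{kt}$ with $k = \ln 2 / 30$):

```python
import math

k = math.log(2) / 30                     # from P(30) = 2 * P(0); k ~ 0.0231
print(k)                                 # the OP's 0.023 was essentially right
print(math.exp(120 * k))                 # growth factor after 120 years: 16.0
print(30 * math.log(10) / math.log(2))   # years to grow tenfold: about 99.66
```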
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316281080245972, "perplexity_flag": "middle"}
http://en.wikisource.org/wiki/The_Mathematical_Principles_of_Natural_Philosophy_(1846)/BookI-VI
# The Mathematical Principles of Natural Philosophy (1846)/BookI-VI From Wikisource, by Isaac Newton, translated by Andrew Motte. Book I, Section VI. SECTION VI. How the motions are to be found in given orbits. PROPOSITION XXX. PROBLEM XXII. To find at any assigned time the place of a body moving in a given parabolic trajectory. Let S be the focus, and A the principal vertex of the parabola; and suppose $4AS\times M$ equal to the parabolic area to be cut off APS, which either was described by the radius SP, since the body's departure from the vertex, or is to be described thereby before its arrival there. Now the quantity of that area to be cut off is known from the time which is proportional to it. Bisect AS in G, and erect the perpendicular GH equal to 3M, and a circle described about the centre H, with the interval HS, will cut the parabola in the place P required. For letting fall PO perpendicular on the axis, and drawing PH, there will be AG² + GH² (= HP² = $(AO-AG)^2 + (PO-GH)^2$) = AO² + PO² − 2GA$\times$AO − 2GH$\times$PO + AG² + GH². Whence 2GH$\times$PO (= AO² + PO² − 2GA$\times$AO) = AO² + ¾PO². For AO² write $AO\times\frac{PO^{2}}{4AS}$; then dividing all the terms by 3PO, and multiplying them by 2AS, we shall have 4/3GH $\times$ AS (= 1/6AO $\times$ PO + ½AS $\times$ PO = $\frac{AO+3AS}{6}\times$ PO = $\frac{4AO+3SO}{6}\times$ PO = the area APO − SPO) = the area APS. But GH was 3M, and therefore 4/3GH $\times$ AS is 4AS $\times$ M. Wherefore the area cut off APS is equal to the area that was to be cut off, 4AS $\times$ M.   Q.E.D. Cor. 1. Hence GH is to AS as the time in which the body described the arc AP to the time in which the body described the arc between the vertex A and the perpendicular erected from the focus S upon the axis. Cor. 2. And supposing a circle ASP perpetually to pass through the moving body P, the velocity of the point H is to the velocity which the body had in the vertex A as 3 to 8; and therefore in the same ratio is the line GH to the right line which the body, in the time of its moving from A to P, would describe with that velocity which it had in the vertex A. Cor. 3. Hence, also, on the other hand, the time may be found in which the body has described any assigned arc AP. Join AP, and on its middle point erect a perpendicular meeting the right line GH in H. LEMMA XXVIII. There is no oval figure whose area, cut off by right lines at pleasure, can be universally found by means of equations of any number of finite terms and dimensions. Suppose that within the oval any point is given; about which as a pole a right line is perpetually revolving with an uniform motion, while in that right line a moveable point going out from the pole moves always forward with a velocity proportional to the square of that right line within the oval. By this motion that point will describe a spiral with infinite circumgyrations.
Now if a portion of the area of the oval cut off by that right line could be found by a finite equation, the distance of the point from the pole, which is proportional to this area, might be found by the same equation, and therefore all the points of the spiral might be found by a finite equation also; and therefore the intersection of a right line given in position with the spiral might also be found by a finite equation. But every right line infinitely produced cuts a spiral in an infinite number of points; and the equation by which any one intersection of two lines is found at the same time exhibits all their intersections by as many roots, and therefore rises to as many dimensions as there are intersections. Because two circles mutually cut one another in two points, one of those intersections is not to be found but by an equation of two dimensions, by which the other intersection may be also found. Because there may be four intersections of two conic sections, any one of them is not to be found universally, but by an equation of four dimensions, by which they may be all found together. For if those intersections are severally sought, because the law and condition of all is the same, the calculus will be the same in every case, and therefore the conclusion always the same; which must therefore comprehend all those intersections at once within itself, and exhibit them all indifferently. Hence it is that the intersections of the conic sections with the curves of the third order, because they may amount to six, come out together by equations of six dimensions; and the intersections of two curves of the third order, because they may amount to nine, come out together by equations of nine dimensions. If this did not necessarily happen, we might reduce all solid to plane Problems, and those higher than solid to solid Problems. But here I speak of curves irreducible in power. For if the equation by which the curve is defined may be reduced to a lower power, the curve will not be one single curve, but composed of two, or more, whose intersections may be severally found by different calculusses. After the same manner the two intersections of right lines with the conic sections come out always by equations of two dimensions; the three intersections of right lines with the irreducible curves of the third order by equations of three dimensions; the four intersections of right lines with the irreducible curves of the fourth order, by equations of four dimensions; and so on in infinitum. Wherefore the innumerable intersections of a right line with a spiral, since this is but one simple curve and not reducible to more curves, require equations infinite in number of dimensions and roots, by which they may be all exhibited together. For the law and calculus of all is the same. For if a perpendicular is let fall from the pole upon that intersecting right line, and that perpendicular together with the intersecting line revolves about the pole, the intersections of the spiral will mutually pass the one into the other; and that which was first or nearest, after one revolution, will be the second; after two, the third; and so on: nor will the equation in the mean time be changed but as the magnitudes of those quantities are changed, by which the position of the intersecting line is determined.
Wherefore since those quantities after every revolution return to their first magnitudes, the equation will return to its first form; and consequently one and the same equation will exhibit all the intersections, and will therefore have an infinite number of roots, by which they may be all exhibited. And therefore the intersection of a right line with a spiral cannot be universally found by any finite equation; and of consequence there is no oval figure whose area, cut off by right lines at pleasure, can be universally exhibited by any such equation. By the same argument, if the interval of the pole and point by which the spiral is described is taken proportional to that part of the perimeter of the oval which is cut off; it may be proved that the length of the perimeter cannot be universally exhibited by any finite equation. But here I speak of ovals that are not touched by conjugate figures running out in infinitum. Cor. Hence the area of an ellipsis, described by a radius drawn from the focus to the moving body, is not to be found from the time given by a finite equation; and therefore cannot be determined by the description of curves geometrically rational. Those curves I call geometrically rational, all the points whereof may be determined by lengths that are definable by equations; that is, by the complicated ratios of lengths. Other curves (such as spirals, quadratrixes, and cycloids) I call geometrically irrational. For the lengths which are or are not as number to number (according to the tenth Book of Elements) are arithmetically rational or irrational. And therefore I cut off an area of an ellipsis proportional to the time in which it is described by a curve geometrically irrational, in the following manner. PROPOSITION XXXI. PROBLEM XXIII. To find the place of a body moving in a given elliptic trajectory at any assigned time. Suppose A to be the principal vertex, S the focus, and O the centre of the ellipsis APB; and let P be the place of the body to be found. Produce OA to G so as OG may be to OA as OA to OS. Erect the perpendicular GH; and about the centre O, with the interval OG, describe the circle GEF; and on the ruler GH, as a base, suppose the wheel GEF to move forwards, revolving about its axis, and in the mean time by its point A describing the cycloid ALI. Which done, take GK to the perimeter GEFG of the wheel, in the ratio of the time in which the body proceeding from A described the arc AP, to the time of a whole revolution in the ellipsis. Erect the perpendicular KL meeting the cycloid in L; then LP drawn parallel to KG will meet the ellipsis in P, the required place of the body. For about the centre O with the interval OA describe the semi-circle AQB, and let LP, produced, if need be, meet the arc AQ in Q, and join SQ, OQ. Let OQ meet the arc EFG in F, and upon OQ let fall the perpendicular SR. The area APS is as the area AQS, that is, as the difference between the sector OQA and the triangle OQS, or as the difference of the rectangles ½OQ $\times$ AQ, and ½OQ $\times$ SR, that is, because ½OQ is given, as the difference between the arc AQ and the right line SR; and therefore (because of the equality of the given ratios SR to the sine of the arc AQ, OS to OA, OA to OG, AQ to GF; and by division, AQ - SR to GF - sine of the arc AQ) as GK, the difference between the arc GF and the sine of the arc AQ.   Q.E.D. SCHOLIUM. But since the description of this curve is difficult, a solution by approximation will be preferable. 
First, then, let there be found a certain angle B which may be to an angle of 57,29578 degrees, which an arc equal to the radius subtends, as SH, the distance of the foci, to AB, the diameter of the ellipsis. Secondly, a certain length L, which may be to the radius in the same ratio inversely. And these being found, the Problem may be solved by the following analysis. By any construction (or even by conjecture), suppose we know P the place of the body near its true place p. Then letting fall on the axis of the ellipsis the ordinate PR from the proportion of the diameters of the ellipsis, the ordinate RQ of the circumscribed circle AQB will be given; which ordinate is the sine of the angle AOQ, supposing AO to be the radius, and also cuts the ellipsis in P. It will be sufficient if that angle is found by a rude calculus in numbers near the truth. Suppose we also know the angle proportional to the time, that is, which is to four right angles as the time in which the body described the arc Ap, to the time of one revolution in the ellipsis. Let this angle be N. Then take an angle D, which may be to the angle B as the sine of the angle AOQ to the radius; and an angle E which may be to the angle N - AOQ + D as the length L to the same length L diminished by the cosine of the angle AOQ, when that angle is less than a right angle, or increased thereby when greater. In the next place, take an angle F that may be to the angle B as the sine of the angle AOQ + E to the radius, and an angle G, that may be to the angle N - AOQ - E + F as the length L to the same length L diminished by the cosine of the angle AOQ + E, when that angle is less than a right angle, or increased thereby when greater. For the third time take an angle H, that may be to the angle B as the sine of the angle AOQ + E + G to the radius; and an angle I to the angle N - AOQ - E - G + H, as the length L is to the same length L diminished by the cosine of the angle AOQ + E + G, when that angle is less than a right angle, or increased thereby when greater. And so we may proceed in infinitum. Lastly, take the angle AOq equal to the angle AOQ + E + G + I +, &c. and from its cosine Or and the ordinate pr, which is to its sine qr as the lesser axis of the ellipsis to the greater, we shall have p the correct place of the body. When the angle N - AOQ + D happens to be negative, the sign + of the angle E must be every where changed into -, and the sign - into +. And the same thing is to be understood of the signs of the angles G and I, when the angles N - AOQ - E + F, and N - AOQ - E - G + H come out negative. But the infinite series AOQ + E + G + I +, &c. converges so very fast, that it will be scarcely ever needful to proceed beyond the second term E. And the calculus is founded upon this Theorem, that the area APS is as the difference between the arc AQ and the right line let fall from the focus S perpendicularly upon the radius OQ. And by a calculus not unlike, the Problem is solved in the hyperbola. Let its centre be O, its vertex A, its focus S, and asymptote OK; and suppose the quantity of the area to be cut off is known, as being proportional to the time. Let that be A, and by conjecture suppose we know the position of a right line SP, that cuts off an area APS near the truth. Join OP, and from A and P to the asymptote draw AI, PK parallel to the other asymptote; and by the table of logarithms the area AIKP will be given, and equal thereto the area OPA, which subducted from the triangle OPS, will leave the area cut off APS. 
And by applying 2APS − 2A, or 2A − 2APS, the double difference of the area A that was to be cut off, and the area APS that is cut off, to the line SN that is let fall from the focus S, perpendicular upon the tangent TP, we shall have the length of the chord PQ. Which chord PQ is to be inscribed between A and P, if the area APS that is cut off be greater than the area A that was to be cut off, but towards the contrary side of the point P, if otherwise: and the point Q will be the place of the body more accurately. And by repeating the computation the place may be found perpetually to greater and greater accuracy. And by such computations we have a general analytical resolution of the Problem. But the particular calculus that follows is better fitted for astronomical purposes. Supposing AO, OB, OD, to be the semi-axes of the ellipsis, and L its latus rectum, and D the difference betwixt the lesser semi-axis OD, and ½L the half of the latus rectum: let an angle Y be found, whose sine may be to the radius as the rectangle under that difference D, and AO + OD the half sum of the axes to the square of the greater axis AB. Find also an angle Z, whose sine may be to the radius as the double rectangle under the distance of the foci SH and that difference D to triple the square of half the greater semi-axis AO. Those angles being once found, the place of the body may be thus determined. Take the angle T proportional to the time in which the arc BP was described, or equal to what is called the mean motion; and an angle V the first equation of the mean motion to the angle Y, the greatest first equation, as the sine of double the angle T is to the radius; and an angle X, the second equation, to the angle Z, the second greatest equation, as the cube of the sine of the angle T is to the cube of the radius. Then take the angle BHP the mean motion equated equal to T + X + V, the sum of the angles T, V, X, if the angle T is less than a right angle; or equal to T + X - V, the difference of the same, if that angle T is greater than one and less than two right angles; and if HP meets the ellipsis in P, draw SP, and it will cut off the area BSP nearly proportional to the time. This practice seems to be expeditious enough, because the angles V and X, taken in second minutes, if you please, being very small, it will be sufficient to find two or three of their first figures. But it is likewise sufficiently accurate to answer to the theory of the planet's motions. For even in the orbit of Mars, where the greatest equation of the centre amounts to ten degrees, the error will scarcely exceed one second. But when the angle of the mean motion equated BHP is found, the angle of the true motion BSP, and the distance SP, are readily had by the known methods. And so far concerning the motion of bodies in curve lines. But it may also come to pass that a moving body shall ascend or descend in a right line; and I shall now go on to explain what belongs to such kind of motions.
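In modern terms, the Scholium's successive corrections for the ellipse amount to solving Kepler's equation $M = E - e\sin E$ for the eccentric anomaly $E$. Newton does not state it in this notation, so the following Python sketch is a modern reading rather than a transcription; fittingly, it uses Newton's own iteration method, each pass adding a small correction angle like the E, G, I, ... of the text.

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton's iteration."""
    E = M if e < 0.8 else math.pi        # standard starting guess
    while True:
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE                          # one successive correction
        if abs(dE) < tol:
            return E

# Mean anomaly of 60 degrees, eccentricity roughly that of Mars:
M, e = math.radians(60), 0.0934
print(math.degrees(kepler_E(M, e)))      # the "equated" angle, after a few passes
```

The series converges very fast for planetary eccentricities, matching the text's remark that "it will be scarcely ever needful to proceed beyond the second term E."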
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9522387385368347, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/132507/every-automorphism-of-mathbbrn-a-linear-mapping
# Every automorphism of $\mathbb{R}^n$ a linear mapping Is there an automorphism of $\mathbb{R}^n$ (here it is seen as a vector space) that is not a linear mapping? - The answer is "no", but the way to prove that depends on your definition of automorphism, $\mathbb{R}^n$ and "linear mapping". – akkkk Apr 16 '12 at 13:57 You need to explain this better. Do you mean that "every automorphism of the topological group $\mathbb{R}$ is a linear mapping"? An automorphism in the category of vector spaces and linear mappings is by definition a linear mapping. – Xabier Domínguez Apr 16 '12 at 13:57 @XabierDomínguez, I can't even understand what you said. My question was more in the sense of "The exponential function is an isomorphism between $(\mathbb{R},+)$ and $(\mathbb{R}^+,\times )$, but if I add the hypothesis that this isomorphism is an automorphism and the spaces are a vector space with the usual sum, will those automorphisms be necessarily linear?". – Gustavo Marra Apr 16 '12 at 14:07 The exponential example was to show that not every isomorphism is linear. – Gustavo Marra Apr 16 '12 at 14:10 ## 2 Answers An automorphism of a vector space is, by definition, an invertible linear mapping. So no. - "...invertible linear mapping from the space to itself" perhaps? – Ragib Zaman Apr 16 '12 at 13:57 1 It's in the definition of automorphism that it is from the space into itself. – Gustavo Marra Apr 16 '12 at 14:08 This is an answer to the OP's question interpreted as "Is there an automorphism of the group $(\mathbb{R}^n,+)$ that is not $\mathbb{R}$-linear?" The answer is yes. Consider $\mathbb{R}$ as a vector space over $\mathbb{Q}.$ Note that as a $\mathbb{R}$-vector space, $\mathbb{R}$ has dimension 1, but as a $\mathbb{Q}$-vector space its dimension is infinite (actually it is the continuum). Let $\{e_i\}$ and $\{v_i\}$ be two different bases of $\mathbb{R}$ as a $\mathbb{Q}$-vector space. Define $f:\mathbb{R}\to \mathbb{R}$ as follows: $f(\sum r_i e_i)=\sum r_i v_i.$ This is an isomorphism of the vector space $\mathbb{R}$ (over $\mathbb{Q}$) onto itself because it maps one basis onto another. In particular it is an automorphism of the additive group $\mathbb{R}.$ But most of these maps are not linear over $\mathbb{R}.$ For example, you can choose $e_1=1,$ $e_2=\pi,$ and complete the basis $\{e_i\}$ from there (note that these two are independent over $\mathbb{Q}$). Now choose $v_1=f(e_1)=1,$ and as $v_2=f(e_2)$ pick any irrational number different from $\pi,$ and complete the basis $\{v_i\}$ from there. This $f$ cannot be of the form $x\mapsto \lambda x,$ i.e. it cannot be $\mathbb{R}$-linear. - Can you give me an example? – Gustavo Marra Apr 16 '12 at 14:13 I can't because these maps are not constructible. They depend on the Axiom of Choice. – Xabier Domínguez Apr 16 '12 at 14:14 Are you talking about finite-dimension vector spaces? I think I should have said that. – Gustavo Marra Apr 16 '12 at 14:15 I'll try to elaborate my answer a bit further. – Xabier Domínguez Apr 16 '12 at 14:16 Ok, I'll make my question again: Consider $\mathbb{R}^n$ as the classical vector space over $\mathbb{R}$ (which was my obvious intent since the beginning). Then this must not hold. – Gustavo Marra Apr 16 '12 at 16:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9352155923843384, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/30867/show-that-mathbbr2-is-not-homeomorphic-to-mathbbr2-setminus-0-0
# Show that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}^2 \setminus\{(0,0)\}$ Show that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}^2 \setminus \{(0,0)\}$. - 12 Consider the fundamental groups. It would certainly help to find the answer you're looking for if you gave a little hint at your background. – t.b. Apr 4 '11 at 10:53 ## 5 Answers Here's a slightly different way of looking at it that avoids fundamental groups (although has its own messy details to check). One of the spaces, upon removing a compact set, can be separated into two connected components with noncompact closure. The other can't. - Nice. It's not even very messy: about all you need is that subsets of $\mathbb{R}^n$ are compact iff closed and bounded. – Chris Eagle Apr 4 '11 at 14:21 3 – Pete L. Clark Apr 4 '11 at 14:35 To expand on Theo Buehler's comment: The fundamental group of a topological space $X$ at a point $x_0 \in X$ is the set of equivalence classes of loops at $x_0$ where two paths are equivalent iff they're homotopic. $\mathbb{R}^2$ and $\mathbb{R}^2 - \{ (0,0)\}$ are both path-connected so the fundamental group is independent of the point $x_0$ you pick. In $\mathbb{R}^2$, you see that any loop at any $x_0$ is homotopic to the constant map $x_0$. (The homotopy is easy to write down; if you can't do that I'll provide it in a second edit.) This means that the fundamental group $\pi_1(\mathbb{R}^2) = \{ [c] \}$ where $c$ is the constant map which is the neutral element of the group, i.e. the fundamental group of $\mathbb{R}^2$ is trivial. In $\mathbb{R}^2 - \{ (0,0)\}$ on the other hand you can pick a loop around $(0, 0)$ and you will see that it is not homotopic to the constant map. Why? Because if you had a homotopy $h$ that contracts the loop to a point, the loop would have to pass through $(0,0)$ at some point in time. But $(0,0)$ is not in $\mathbb{R}^2 - \{ (0,0)\}$, so $h$ is not a valid homotopy. Hope this helps. - 3 In order to actually prove that in the case of $\dot{\mathbb R}^2$ there is no such homotopy for the loop around $(0,0)$ one needs stronger means, e.g. the fact that the integral ${1\over 2\pi}\int_\gamma{dz\over z}\in{\mathbb Z}$ is invariant. – Christian Blatter Apr 4 '11 at 13:09 @Christian Blatter: Many thanks for pointing this out. I'm a beginner in this subject, so may I ask: why is it not enough to argue that any homotopy would go through $(0,0)$? – Matt N. Apr 4 '11 at 13:35 1 Intuitively it is clear that any homotopy "would go through $(0,0)$", but there are things between heaven and earth... That's where invariants come in: They furnish the irrefutable impossibility argument. – Christian Blatter Apr 4 '11 at 13:49 1 @Christian Blatter: I'm sorry but I think this doesn't answer my question satisfactorily. Could you give me a mathematically rigorous argument why it is not enough? To me arguing like I argued in my answer above is a proof by contradiction, so it seems rigorous to me. – Matt N. Apr 4 '11 at 14:06 5 @Matt: it doesn't really work that way: the burden of rigor is on you in giving your argument. For instance, the result that you claim for $\mathbb{R}^2$ is false for $\mathbb{R}^3$, but where in your argument did you call attention to some distinction between the two spaces that makes your argument work? – Pete L. Clark Apr 4 '11 at 14:19 Let me give a solution to the problem that uses the idea of homotopy of loops but avoids that of the fundamental group or the co/homology groups. (This is actually a fleshing out of Christian Blatter's comment to Matt's answer.
FWIW I thought of this independently a couple of hours ago but had to run off to proctor an exam for someone else's class.) We say that two loops $\gamma_0, \gamma_1: S^1 \rightarrow X$ are homotopic if there exists a continuous function $G: S^1 \times [0,1] \rightarrow X$ such that for all $x \in S^1$, $G(x,0) = \gamma_0(x)$ and $G(x,1) = \gamma_1(x)$. (Note I am not fixing a basepoint here: it doesn't matter either way for what I am about to say.) We say that a topological space $X$ is simply connected if it is connected and every loop in $X$ is homotopic to a constant loop. It is clear that if $X$ is simply connected and $Y$ is not, then $X$ cannot be homeomorphic to $Y$. I claim that $\mathbb{R}^2$ is simply connected and $\mathbb{R}^2 \setminus \{0\}$ is not. Step 1: We show directly from the definition that $\mathbb{R}^2$ is simply connected. Indeed, if $\gamma: S^1 \rightarrow \mathbb{R}^2$ is any loop, then the map $G: S^1 \times [0,1]$, $G(x,t) = (1-t)\gamma(x)$ is a homotopy from $\gamma$ to the constant loop based at $0$. Step 2: We show that $\mathbb{R}^2 \setminus \{0\}$ is not simply connected. For this we exploit the fact that $\mathbb{R}^2 \setminus \{0\}$ is an open subset in the complex plane and use complex analysis, specifically: Theorem (Homotopy Form of Cauchy's Integral Theorem): Let $\Omega$ be an open subset of the complex plane, let $f$ be a holomorphic function on $\Omega$ and let $\gamma_1,\gamma_2$ be two homotopic paths in $\Omega$. Then $\int_{\gamma_1} f(z) dz = \int_{\gamma_2} f(z) dz$. This is a nontrivial result, but it is a standard variant of the usual Cauchy Integral Formula that is found in most basic complex analysis texts. In particular, if $\Omega$ is simply connected, then every loop $\gamma: S^1 \rightarrow \Omega$ is homotopic to a constant loop and thus for every holomorphic function $f$ on $\Omega$ we have $\int_{\gamma} f(z)dz = 0$. In particular this applies to $\Omega = \mathbb{C}$ by Step 1. The endgame is probably familiar: on $\mathbb{C} \setminus \{0\}$, if you integrate the holomorphic function $f(z) = \frac{1}{z}$ on the path $\gamma(t) = e^{2 \pi i t}$ then you get $2 \pi i$, which is not zero. Thus $\mathbb{C} \setminus \{0\}$ is not simply connected and hence not homeomorphic to $\mathbb{C}$. - $\mathbb{R}^2\backslash\{(0,0)\}\cong S^1\times\mathbb{R}$. You can use co/homology or fundamental groups (if that's in your toolkit). You can note that one is contractible and the other is homotopy equivalent to a circle (and tell those apart). You can note that one has Euler characteristic 0 and the other has Euler characteristic 1 (if this is something you can calculate). - Conceptually it's because you're mapping a continuous space to one with a hole in it. You can't have a continuous map from $\mathbb{R}^2$ to $\mathbb{R}^2\setminus (0,0)$ since to do so you'd have to tear a hole in it. - First of all, I'm not the one who voted this down, although I was tempted. Second, you probably mean homeomorphism where you say continuous map ($(x,y) \mapsto (e^{x}\cos{(y)},e^{x}\sin{(y)})$ is very continuous and even onto, isn't it?). Third, I'd rather say intuitively than conceptually but still, this would be more of a comment than an answer, I think. – t.b. Apr 4 '11 at 11:08
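The winding-number invariant in Christian Blatter's comment and Pete L. Clark's answer can be checked numerically. The sketch below (my own illustration) approximates $\frac{1}{2\pi i}\oint_\gamma \frac{dz}{z}$ by a Riemann sum: it is about 1 for a loop around the puncture and about 0 for a loop that misses it.

```python
import cmath

def winding_integral(center, radius, n=20000):
    """Approximate (1 / 2*pi*i) * the contour integral of dz/z around the
    circle |z - center| = radius, by a left Riemann sum with n steps."""
    total = 0
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        z0 = center + radius * cmath.exp(1j * t0)
        z1 = center + radius * cmath.exp(1j * t1)
        total += (z1 - z0) / z0          # f(z) dz with f(z) = 1/z
    return total / (2j * cmath.pi)

print(abs(winding_integral(0, 1)))   # ~1: the loop encircles the puncture at 0
print(abs(winding_integral(3, 1)))   # ~0: a contractible loop in C \ {0}
```

The integer-valued invariant cannot change under a homotopy, which is exactly the "stronger means" the comment asks for.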
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528986811637878, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/170414-binomial-probability-questions.html
# Thread: 1. ## Binomial Probability Questions Hi, I have a couple of questions concerning binomial probabilities that I have just not been able to figure out! I've been flipping through my statistics book and have found nothing that covers the problems in question. Any help from you math geniuses would be much appreciated. 1) It has been found that 20% of students who enroll in an introductory statistics class withdraw before the end of the semester. In a class of 15 students, what is the probability that: -Between 2 and 6 students withdraw -Less than 4 students withdraw 2) It is estimated that in 90% of criminal trials, the jury will reach the correct verdict. Suppose 100 cases are selected at random, what is the probability that: -At least 93 cases had the correct verdict? -Fewer than 85 cases had the correct verdict? Any help you guys could offer would really be appreciated. Thank you in advance! 2. Those problems are just routine if you have studied the corresponding theory. For example, for your first question: $P(2\leq \xi \leq 6)=\sum_{k=2}^{6}\binom{15}{k}(0.2)^k(0.8)^{15-k}$ Fernando Revilla
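For the numerical values, one can spell the formula out directly. A minimal Python sketch (my own, using only the standard library and assuming the usual binomial model with independent trials):

```python
from math import comb

def binom_pmf(n, k, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Question 1: n = 15 students, withdrawal probability p = 0.2
print(sum(binom_pmf(15, k, 0.2) for k in range(2, 7)))     # P(2 <= X <= 6)
print(sum(binom_pmf(15, k, 0.2) for k in range(0, 4)))     # P(X < 4)

# Question 2: n = 100 trials, correct-verdict probability p = 0.9
print(sum(binom_pmf(100, k, 0.9) for k in range(93, 101))) # P(X >= 93)
print(sum(binom_pmf(100, k, 0.9) for k in range(0, 85)))   # P(X < 85)
```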
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.968768835067749, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/204699/whats-the-difficulty-in-finding-instantaneous-velocity
# What's the difficulty in finding instantaneous velocity? I'm reading Stewart's Essential Calculus: EXAMPLE 1 Suppose that a ball is dropped from the upper observation deck of the CN Tower in Toronto, 450 m above the ground. Find the velocity of the ball after 5 seconds. SOLUTION Through experiments carried out four centuries ago, Galileo discovered that the distance fallen by any freely falling body is proportional to the square of the time it has been falling. (This model for free fall neglects air resistance.) If the distance fallen after $t$ seconds is denoted by $s(t)$ and measured in meters, then Galileo's law is expressed by the equation $$s(t)= 4.9t^2$$ The difficulty in finding the velocity after 5 s is that we are dealing with a single instant of time $(t=5)$, so no time interval is involved. However, we can approximate the desired quantity by computing the average velocity over the brief time interval of a tenth of a second from $t=5$ to $t=5.1$: What did he mean by difficulty here? - ## 2 Answers He means that whereas it's easy to define the average velocity over a period (as displacement divided by time), it's much harder to define instantaneous velocity. So instead of thinking initially about an instantaneous velocity, he considers the average velocity over a very short period of time. - Average velocity has clear physical content: it is change in displacement divided by elapsed time. Instantaneous velocity is more of a theoretical construct: there is no clear way that such a thing could be measured. - Isn't instantaneous velocity just $v_{avg}=\frac{\Delta d}{\Delta t}=\frac{d_f-d_i}{t_f-t_i}$? – Gustavo Bandeira Sep 30 '12 at 2:51 4 That fraction only makes sense if $t_f\neq t_i$, but that means that you're not looking at an instant. – user22805 Sep 30 '12 at 2:56 It may be a very dumb question, but why can't we just plug $t=5$ into $s(t)= 4.9t^2$? – Gustavo Bandeira Sep 30 '12 at 23:28 1 Here, $s(t)$ stands for displacement (that is, the distance fallen), not velocity or speed. – user22805 Sep 30 '12 at 23:48
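Stewart's "approximate by average velocities over shrinking intervals" is easy to see numerically. A short sketch (my own, not from the book): the average velocity over $[5, 5+h]$ tends to $49$ m/s as $h \to 0$, which is the instantaneous velocity at $t=5$.

```python
def s(t):
    # Galileo's law for free fall: metres fallen after t seconds
    return 4.9 * t**2

for h in (0.1, 0.01, 0.001, 0.0001):
    avg_v = (s(5 + h) - s(5)) / h    # average velocity over [5, 5 + h]
    print(h, avg_v)                  # approaches 49 m/s as h shrinks
```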
http://physics.stackexchange.com/questions/27302/classic-mass-predictions-from-left-right-models-with-discrete-symmetries?answertab=oldest
# Classic mass predictions from Left-Right models with discrete symmetries?

I am covering the classic literature on predictions of the Cabibbo angle or other relationships in the mass matrix. As you may remember, this research was all the rage in the late seventies, after it was noticed that $\tan^2 \theta_c \approx m_d/m_s$. A typical paper of that age was Wilczek and Zee, Phys Lett 70B, pp 418-420. The technique was to use an $SU(2)_L \times SU(2)_R \times \dots$ model and set some discrete symmetry in the Right multiplets. Most papers got to predict $\theta_c$, and some models with three generations or more (remember the third generation was a new insight in the mid-to-late seventies) were able to produce additional phases in relationship with the masses.

Now, what I am interested in is papers and models also including some prediction of mass relationships, alone, or cases where $\theta_c$ is fixed by the model and then some mass relationship follows. A typical case here is Harari-Haut-Weyers (spires). It puts in a symmetry structure such that the masses of up, down and strange are fixed to: $m_u=0, {m_d\over m_s} = {2- \sqrt 3 \over 2 + \sqrt 3}$ Of course in such a case $\theta_c$ is fixed to 15 degrees. But also $m_u=0$, which is an extra prediction even if the fixing of the Cabibbo angle were ad hoc.

OK, so my question is: are there other models in this theme containing predictions for quark masses? Or was Harari et al. an exception until the arrival of Koide models? -

Also, published criticisms of these papers are welcome. I am aware of some for Harari et al. – user135 Nov 25 '11 at 0:02
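As a side check (my own, not taken from any of the cited papers): the quoted Harari-Haut-Weyers ratio does fix the Cabibbo angle at 15 degrees through $\tan^2\theta_c = m_d/m_s$, since $(2-\sqrt 3)/(2+\sqrt 3) = (2-\sqrt 3)^2 = \tan^2 15^\circ$. A two-line numerical confirmation:

```python
from math import sqrt, tan, radians

# The Harari-Haut-Weyers mass ratio equals tan^2 of 15 degrees.
ratio = (2 - sqrt(3)) / (2 + sqrt(3))
print(ratio)                   # 0.0717967697...
print(tan(radians(15)) ** 2)   # 0.0717967697... (the same number)
```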
http://mathoverflow.net/questions/47569/what-makes-four-dimensions-special/51331
What makes four dimensions special?

Do you know properties which distinguish four-dimensional spaces among the others? 1. What makes four-dimensional topological manifolds special? 2. What makes four-dimensional differentiable manifolds special? 3. What makes four-dimensional Lorentzian manifolds special? 4. What makes four-dimensional Riemannian manifolds special? 5. other contexts in which four dimensions or $3+1$ dimensions play a distinguishing role. If you feel there are many particularities, please list the most interesting from your personal viewpoint. They may be concerned with why spacetime has four dimensions, but they should not be limited to this. -

2 Can you give some motivation? This question sounds rather arbitrary. You could equally well ask about what makes 1,2,3 or any other number of dimensions special. – Alex Bartel Nov 28 2010 at 8:56

6 Motivation: space-time! :) – Eivind Dahl Nov 28 2010 at 8:59

2 see similar question mathoverflow.net/questions/7921/… – Nikita Kalinin Nov 28 2010 at 9:02

11 The differential geometry reason seems to be connected to the fact that $2=4/2$; $2$-forms are also mid-dimensional forms. This allows for things like the (anti-)self-dual Yang-Mills equation. – Torsten Ekedahl Nov 28 2010 at 10:51

2 This question seems like a special case of this earlier question mathoverflow.net/questions/5372/dimension-leaps – jc Nov 28 2010 at 20:12

12 Answers

The Whitney trick is an important step in Smale's proof of the Poincaré conjecture for smooth manifolds of dimension $n\geqslant 5$. It turns out however that such a trick does not work in dimension 4. However, as shown by Freedman (using previous work by Casson), it is possible (in a non-trivial way) to make this trick work for topological 4-manifolds. This partially explains the striking difference between topological and smooth manifolds in dimension 4. As an example of this striking and exceptional difference between these two categories, we know that in every dimension $n\neq 4$ a topological closed manifold may admit only finitely many smooth structures. In dimension $n=4$ however there are 4-manifolds like the $K3$ surface or (very recently) $S^2\times S^2$ that admit infinitely many distinct smooth structures. As far as we know, it may well be that any closed smoothable 4-manifold has infinitely many distinct structures! The question is open for instance for $S^4$ itself, which might have any number of distinct differentiable structures ranging from 1 to $\infty$ (extremes included). That's why we say that the Poincaré Conjecture is true for topological 4-manifolds but is still (very) open for smooth 4-manifolds. -

Aren't all K3 surfaces diffeomorphic, and thus have the same smooth structure? – Gunnar Magnusson Nov 28 2010 at 21:32

8 Yes, they are all diffeomorphic, that's why topologists say "the" K3-surface. However, there are (infinitely many) smooth structures on this manifold: these smooth structures do not arise from a complex structure, they are constructed with other cut-and-paste techniques (see Fintushel and Stern in 1996, arxiv.org/abs/dg-ga/9612014) and are distinguished using Seiberg-Witten invariants. – Bruno Martelli Nov 28 2010 at 22:31
(Riemannian geometry) Four is the only dimension $n$ in which the adjoint representation of SO($n$) is not irreducible. Since the adjoint representation is isomorphic to the representation on 2-forms, this means that the bundle of 2-forms on an oriented Riemannian manifold decomposes into self-dual and anti-self-dual forms. 2-forms are particularly significant, since the curvature of a connection is a 2-form. In particular the curvature of the Levi-Civita connection is a 2-form with values in the adjoint bundle, so it has a 4-way decomposition into self-dual and anti-self-dual pieces. Hence there are natural curvature conditions on Riemannian 4-manifolds which have no analogue in other dimensions (without imposing additional structure). The impact of self-duality includes: special properties of Einstein metrics, Yang-Mills connections, and twistor theory for (anti-)self-dual Riemannian manifolds. - 1 Note also Torsten Ekedahl's response to the question above (which I missed when posting this): in any even dimension, middle dimensional forms are not irreducible for the complexified special orthogonal group. This accounts not only for the special features of four dimensions in Riemannian geometry, but also dimensions 2 and 6, where 1-forms and 3-forms play a special role. Further, Lorentzian geometry in four dimensions is special because the bundle of 2-forms has a natural complex structure: this underpins the Petrov Classification of spacetimes, for example. – David MJC Nov 28 2010 at 23:16 The Yang-Mills functional $\int_{{\bf R}^{1+d}} F^{\mu \nu} F_{\mu \nu}\ dx dt$ is dimensionless (scale-invariant) if and only if the spacetime dimension is four. (The integrand is a quadratic function of the curvature, which is two derivatives of the metric: 2 times 2 is equal to 4. In contrast, the Dirichlet functional, which involves a quadratic function of single derivatives rather than double derivatives, becomes critical at two dimensions rather than four, which explains why harmonic functions behave particularly nicely in two spatial dimensions. Similarly, the Einstein-Hilbert functional involves a linear function of curvature, and is thus also critical at two dimensions, explaining the nice behaviour of Ricci flow and similar equations in two dimensions.) For similar reasons, the Yang-Mills energy $\int_{{\bf R}^d} T_{00}\ dx$ is dimensionless if and only if the spatial dimension is four. As such, four spatial dimensions is "critical" for the Yang-Mills equation in the sense that for a fixed energy, one gets more or less the same nonlinear behaviour at both coarse and fine scales. This is also related to why Yang-Mills instantons only emerge at spatial dimensions four or higher; below this dimension, (elliptic) Yang-Mills connections are always smooth. (In general, the singularities of such connections are known to have codimension at least four, a classic result of Uhlenbeck.) - 2 Terry, I agree with all of this, but it raises a question I've had for a long time, which is why should the Ricci flow work so well for extracting the topological structure of a 3-manifold? I confess that I have not studied Perelman's work as closely as I should, but I was wondering if there is a high level explanation like the ones you give above for why one would expect in advance that Perelman's ideas would work. I myself conjectured a long time ago a completely different approach that did involve using the scale-invariant $L_p$ norm of curvature. 
That obviously didn't work out as well – Deane Yang Jan 6 2011 at 19:23

3 Good question! Certainly Ricci flow is much easier to study in two dimensions than in any other dimension, so it was a real feat of Perelman to push the three-dimensional theory as much as he did. Much of Perelman's work is in fact valid in all dimensions, though at a few points he uses (much as Hamilton did before him) the fact that Ricci curvature controls Riemann curvature, which is only true up to three dimensions. But more importantly, perhaps, in three dimensions the only singularities look like S^3, S^2 x R, or quotients thereof, and are thus amenable to surgery. (cont) – Terry Tao Jan 6 2011 at 23:41

1 In four dimensions, singularities with the local structure of S^2 x R^2 begin to appear, and this is bad because it is not clear at all how one could excise them by surgery. Short of finding a non-surgical approach to dealing with Ricci flow singularities, it's not clear to me at all how to push the Ricci flow strategy beyond three dimensions, for instance to tackle the smooth 4D Poincare conjecture. (The situation may be better for 4D Ricci-Kahler flow, though.) – Terry Tao Jan 6 2011 at 23:43

3 I should add that one of the key innovations of Perelman's approach was to not work with quantities such as the Einstein-Hilbert functional (which were best suited to two spatial dimensions), but instead to find new scale-invariant quantities (specifically, Perelman entropy and Perelman reduced volume) which had good monotonicity behaviour wrt Ricci flow in all dimensions. Once he did this, there was no longer anything particularly special about two dimensions, and in principle one could now proceed in any dimension. (And indeed, his noncollapsing theorem holds in all dimensions.) – Terry Tao Jan 6 2011 at 23:47

2 Hmm. I suppose this is possible, though in practice the singularity could have a more complicated 2-surface than D^2 in it (S^2 x R^2 is only the local structure of the singularity, not the global structure). The difficulties may be on the geometric side: the surgery has to respect the geometry enough that the delicate monotonicity formulae that underlie the noncollapsing theorem are not destroyed. In the 3D case the singularities are simple enough that one can locate "horns" where the singularities look geometrically like the cylinder S^2 x R^1, which is crucial for surgery. – Terry Tao Jan 7 2011 at 22:19

It's the only dimension in which the smooth Poincare conjecture is still open. It's the only dimension in which $\mathbb R^n$ has a nonstandard smooth structure. (In fact uncountably many of them.) There's a lot going on in four dimensions. In some sense it's right at the boundary between low and high-dimensional topology. -

Thanks! Indeed, one of the particularities comes from Donaldson's results (then Seiberg and Witten). – Cristi Stoica Nov 28 2010 at 9:24

+1 for the last sentence. I would also mention the h-cobordism theorem, which is alluded to elsewhere (the Whitney trick) as a (perhaps the?) key manifestation of this. – Steve Huntsman Nov 28 2010 at 14:46

And it is the only dimension in which a closed smoothable manifold (possibly all of them) can have infinitely many smooth structures. – Xiaolei Wu Nov 28 2010 at 21:22

A comment is that 4 is the first dimension for which every finitely presented group may be realized as the fundamental group of a closed smooth 4-manifold.
Other special properties are that the first Pontryagin class and the Kirby-Siebenmann invariant live in 4-dimensional cohomology. -

Four is the dimension in which the maximum number of regular polytopes exists. (Apart from polygons in the plane, of course. But those are "abelian", hence boring :) ) -

In his new book "The shape of inner space" (2010), Fields medalist Shing-Tung Yau cites Simon Kirwan Donaldson from Imperial College London (p. 68): No one yet knows, from a fundamental standpoint, exactly what makes four dimensions so special, Donaldson admits. Prior to his work, we knew virtually nothing about "smooth equivalence" (diffeomorphism) in four dimensions, although the mathematician Michael Freedman (formerly at the University of California, San Diego) had provided insights on topological equivalence (homeomorphism). In fact, Freedman topologically classified all four-dimensional manifolds, building on the prior work of Andrew Casson (now at Yale). Donaldson provided fresh insights that could be applied to the very difficult problem of classifying smooth (diffeomorphic) four-dimensional manifolds, thereby opening a door that had previously been closed. Before his efforts, these manifolds were almost totally impenetrable. And though the mysteries largely remain, at least we now know where to start. -

Four is the maximum number of dimensions for which the Busemann-Petty problem has an affirmative answer. This problem is discussed in my answer to another question. Pinning down why there is a shift between four and five dimensions is an interesting question and is probably not completely understood. Some explanations for the shift can be given but the ones I know look more like analytic artifacts and don't seem to give any profound geometric insight. To give a taste, let me mention an example of an analytic result which is closely related to the switch.

First some preliminary explanation. To every origin-symmetric convex body $K$ in $\mathbb{R}^n$ there is an associated norm $||\cdot||_K$ on $\mathbb{R}^n$ whose unit ball is precisely $K$. If we restrict the norm to the unit sphere $S^{n-1}$ and take its reciprocal, we obtain the naturally defined radial function $\rho_K$ of $K$. In fact, any continuous, even, positive function on the sphere will be the radial function of some (not necessarily convex) origin-symmetric star body. Next, given an origin-symmetric star body $L$, we can define its so-called intersection body $I L$ to be the origin-symmetric star body whose radial function is given by $$\rho_{I L}(\xi)=Vol_{n-1}(L \cap \xi^\perp),\quad \xi\in S^{n-1}.$$ Now, there is a simple (almost tautological) formula for the volumes of central sections of a body in terms of the spherical Radon transform of its radial function: $$Vol_{n-1}(K\cap \xi^\perp) = \frac{1}{n-1}R(||\cdot||_K^{-n+1})(\xi),\quad \xi\in S^{n-1}.$$ Using this formula we have that $$\rho_{I L}(\xi)=\frac{1}{n-1} R \rho_L^{n-1}(\xi),\quad \xi\in S^{n-1}.$$ It turns out that the Busemann-Petty problem is equivalent to the question of whether every origin-symmetric convex body $K$ in $\mathbb{R}^n$ is the intersection body of some star body. By the above remarks it is not hard to be convinced that this question is then related to the positivity of the inverse spherical Radon transform.
Now, by utilizing a connection between Fourier analysis and the spherical Radon transform, we get that $$Vol_{n-1}(K\cap \xi^\perp) = \frac{1}{\pi(n-1)}(||\cdot||_K^{-n+1})^\wedge(\xi),\quad \xi\in S^{n-1}.$$ Here the function $||\cdot||_K^{-n+1}$ is locally integrable and the Fourier transform is taken in the sense of distributions. Thus, if $K$ is an intersection body of a star body $L$ then $$||\xi||_K^{-1} = \frac{1}{\pi(n-1)}(||\cdot||_L^{-n+1})^\wedge(\xi)$$ and a small argument using this formula shows that $(||\cdot||_K^{-1})^\wedge$ is a positive distribution, and hence that $||\cdot||_K^{-1}$ is a positive definite distribution. The converse also holds and we have the general result that a star body $K$ in $\mathbb{R}^n$ is an intersection body iff $||\cdot||_K^{-1}$ represents a positive definite distribution in $\mathbb{R}^n$. In summary, the very geometric Busemann-Petty problem is closely related to the positive definiteness of certain distributions (coming from certain negative powers of norms on $\mathbb{R}^n$). With this vague background, perhaps we can appreciate that the following analytic result is closely related to the switch from 4 to 5 dimensions in the problem:

Theorem. Let $n \ge 3$ be an integer and $2 < q \le \infty$ a real number. Then the distribution $||\cdot||_q^{-p}$ is positive definite if $p\in (0,n-3)$ and is not positive definite if $p\in [n-3,n)$. As a consequence, the unit ball of the space $\ell_q^n$ is an intersection body iff $n \le 4$. -

$4=11-7$ and $11$ is the maximal dimension for supersymmetry with spins $\le 2$ while $7$ is the first dimension in which there exist compact manifolds of exceptional holonomy. -

Why is that difference significant? – Mariano Suárez-Alvarez Jan 7 2011 at 0:47

1 In M-theory spacetime is 11-dimensional, so the question of why 4 dimensions can instead be viewed as the question of why 7 of the dimensions are so small as to be unobservable at present energies. String theorists certainly don't know the answer to this question, but one fashionable approach involves choosing 7 of the dimensions to be those of a compact manifold with $G_2$ holonomy. – Jeff Harvey Jan 7 2011 at 1:25

Four is the dimension of the oriented Riemannian manifolds for which we can think of gwistor space. Yes, gwistor space. -

Also related to polytopes, there is Richter-Gebert's universality theorem for 4-polytopes (which I quote from Ziegler's book): Every elementary semialgebraic set defined over $\mathbb{Z}$ is stably equivalent to the realization space of some 4-dimensional polytope. -

Here is more about regular polytopes and four dimensions. For regular convex polytopes, there are six in four dimensions, and in all dimensions higher than 4 there are three. For nonconvex regular polytopes, there are 10 in four dimensions and zero in all higher dimensions. For convex Euclidean tessellations, there are 3 in four dimensions and one (the hypercubic honeycomb) in each higher dimension. However, there are some cases in which the 5th and 6th dimensions have different values: for convex hyperbolic tessellations, there are 4 in the fourth dimension, 5 in the fifth, and zero in all higher dimensions; for nonconvex hyperbolic tessellations, there are zero in four dimensions, four in the fifth dimension, and zero in all higher dimensions. I used the wikipedia article "List of regular polytopes", which is available here, as a source for the above information. -
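The first two counts in the last answer can be verified mechanically using the standard dihedral-angle criteria: $\{p,q\}$ is a Platonic solid iff $1/p+1/q>1/2$, and $\{p,q,r\}$ is a regular convex 4-polytope iff $\{p,q\}$ and $\{q,r\}$ are Platonic and $\sin(\pi/p)\sin(\pi/r)>\cos(\pi/q)$. A small sketch (my own illustration; the search bound 9 is an arbitrary cutoff that is comfortably large enough):

```python
from math import sin, cos, pi
from itertools import product

# Schläfli symbols {p,q} of the Platonic solids.
platonic = [(p, q) for p, q in product(range(3, 10), repeat=2)
            if 1 / p + 1 / q > 1 / 2]
print(len(platonic), platonic)     # 5 of them

# Schläfli symbols {p,q,r} of the regular convex 4-polytopes:
# both facets {p,q} and vertex figures {q,r} must be Platonic,
# and the dihedral-angle inequality must hold strictly.
polychora = [(p, q, r)
             for (p, q), (q2, r) in product(platonic, platonic)
             if q == q2 and sin(pi / p) * sin(pi / r) > cos(pi / q)]
print(len(polychora), polychora)   # 6 of them
```

Equality in the last inequality corresponds to a Euclidean honeycomb rather than a polytope, which is why $\{4,3,4\}$, the cubic honeycomb, is correctly excluded.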
http://physics.stackexchange.com/questions/50027/is-it-possible-to-write-a-density-matrix-in-the-following-form
# Is it possible to write a Density Matrix in the following form?

Is it possible to write an arbitrary density matrix $\hat{\rho}$ in the following form? $$\hat{\rho} ~=~ \frac{1}{N} \sum_{\ell=1}^N \left|x_{\ell}\right\rangle \left\langle x_{\ell}\right|,$$ where $\left\{\left|x_{\ell}\right\rangle\right\}_{\ell = 1}^{N}$ are normalized states (but not necessarily orthogonal). If yes, how can one prove this? -

I'm not answering right away because right now I can't think of a way to mathematically prove it (and I'm a lil' bit busy ATM), but I'm sure that to construct a density matrix you don't need the states to be orthogonal to each other. – user17581 Jan 12 at 14:48

Yeah, that's right, but the coefficients of $|x_{\ell}\rangle$ here are all the same and we have factorized them out as $\frac{1}{N}$; besides, the $|x_{\ell}\rangle$ are normalized states! How can it be possible? – physics_xyz Jan 12 at 14:56

It's just the diagonalization of the density matrix, a Hermitian matrix, isn't it? $N$ must be chosen to be nothing else than the dimension of the matrix for generic ones, otherwise the $x$-vectors wouldn't be orthogonal to each other. – Luboš Motl Jan 12 at 15:17

1 I don't think so, because here $\left\{|x_{\ell}\rangle\right\}_{\ell = 1}^{N}$ don't form a basis for the space; they are just an ensemble. – physics_xyz Jan 12 at 15:46

1 You said that the $|x_l\rangle$ are not necessarily orthonormal, but do they span the (presumably finite dimensional) space? If not, then you can't construct an arbitrary density matrix in this way. – twistor59 Jan 13 at 8:31

## 2 Answers

Let us reformulate OP's question (v3) as follows: Let $H$ be an $N$-dimensional Hilbert space. Is it possible to write an arbitrary density operator $$\tag{1} \hat{\rho}~\in~ B(H)~\cong~ {\rm Mat}_{N\times N}(\mathbb{C})$$ in the form $$\tag{2} \hat{\rho} ~=~ \frac{1}{N} \sum_{m=1}^N |m) (m|,$$ where $\left\{|m) \right\}_{m = 1}^{N}$ are normalized states $$\tag{3}(m|m) ~=~1,$$ but not necessarily orthogonal? The answer is Yes.

Proof: Because $\hat{\rho}$ is a positive operator, it may be diagonalized wrt. an orthonormal basis. Hence there exist an orthonormal basis $\left\{|n\rangle \right\}_{n = 1}^{N}$ and eigenvalues $\lambda_1, \ldots, \lambda_N \geq 0$ such that $$\tag{4} \hat{\rho} ~=~ \sum_{n=1}^N \lambda_n|n\rangle \langle n|,$$ with unit trace $$\tag{5} \sum_{n=1}^N \lambda_n~=~ {\rm tr} \hat{\rho}~=~1.$$ Now define $$\tag{6} |m)~:=~ \sum_{n=1}^N \exp\left(\frac{2\pi i}{N} mn \right) \sqrt{\lambda_n} |n\rangle .$$ It is straightforward to check that eqs. (2) and (3) are satisfied. -

My answer: not every density matrix is allowed to be written in this way. However, if you admit non-normalized states, then in finite-dimensional Hilbert spaces each density matrix can be expressed in this way and, moreover, there is an infinite number of decompositions of that form. (See for example the book Nielsen & Chuang, "Quantum Computation and Quantum Information".) -
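For readers who want to see the accepted construction in action, here is a small numerical check, as a sketch (the random state, seed, and variable names are mine, not part of the answer; only NumPy is assumed):

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)

# A generic density matrix: positive semidefinite with unit trace (eq. 1).
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Diagonalize as in eq. (4); the columns of V are the orthonormal |n>.
lam, V = np.linalg.eigh(rho)
lam = np.clip(lam, 0.0, None)   # guard against tiny negative round-off

# Build the non-orthogonal states |m) of eq. (6).
n = np.arange(1, N + 1)
states = [V @ (np.exp(2j * np.pi * m * n / N) * np.sqrt(lam))
          for m in range(1, N + 1)]

# Eq. (3): each |m) is normalized.
assert np.allclose([np.linalg.norm(x) for x in states], 1.0)

# Eq. (2): the uniform mixture of the |m) reproduces rho.
rho_rec = sum(np.outer(x, x.conj()) for x in states) / N
assert np.allclose(rho_rec, rho)
print("decomposition verified")
```

The reconstruction works because summing the phases $e^{2\pi i m(n-n')/N}$ over $m$ gives $N\delta_{nn'}$, which is exactly the step hidden in "straightforward to check".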
http://math.stackexchange.com/questions/207826/algorithm-to-find-all-vertices-exactly-k-steps-away-in-an-undirected-graph
# Algorithm to find all vertices exactly $k$ steps away in an undirected graph

This question may be better served at cs.SE, but I am not very familiar with CS lingo, so I'm hoping the maths community would be able to answer it as well... I have an undirected graph, and I am interested in finding all vertices in the graph exactly $k$ steps away from a given vertex. In other words, I want to compute the set of all vertices whose shortest path from the given vertex is $k$ steps long. Is there such an algorithm, or is there an obvious modification to a different algorithm? I'm sure that something like Dijkstra's algorithm can solve it brute force by computing the shortest path to each vertex, but I'm hoping that there is something cleverer that doesn't resort to brute-forciness. -

1 Breadth-first search? – Long Oct 5 '12 at 15:59

@Long It didn't take you Long to come up with that. – Graphth Oct 5 '12 at 16:40

## 3 Answers

Another algorithm... let $A$ be the adjacency matrix of the graph. Consider successive powers of $A$: $A, A^2, A^3, \ldots, A^k$. A vertex $u$ is at distance $k$ from a vertex $v$ if and only if the $u, v$ entries of $A, A^2, \ldots, A^{k-1}$ are all $0$ and the $u, v$ entry of $A^k$ is not zero. In general, the $uv$ entry of $A^j$ is the number of walks from $u$ to $v$ of length $j$. -

!! This is exactly what I need! – Arkamis Oct 5 '12 at 16:09

@EdGorcenski Why do you need this, by the way? For small graphs, this is probably instantaneous. For larger graphs, you'd probably go with Henning's answer. This one has the advantage that it's very simple and easy to program. But, if you wanted such a thing, maybe it's already in Sage... hmm. Yes, it looks like in fact it is called "breadth_first_search". My point here is just that a lot of graph theory stuff is already programmed if you wanted to use that instead of reinventing the wheel. – Graphth Oct 5 '12 at 16:11

I have a map of a virtual city, and a user is instructed to navigate the city using certain contextual clues, placed at intersections. The user needs to make a certain number of decisions (tests), and so I need to be able to find a subset of intersections at which I can position objectives that ensure that the user encounters a pre-specified number of tests. – Arkamis Oct 5 '12 at 16:15

I'm sure there's stuff out there; for now, I'm doing the analysis in MATLAB, and I can probably find code that does exactly what I need. My reason for posting the question is that I wasn't even sure what I needed to look for at first :) – Arkamis Oct 5 '12 at 16:18

A bit of trivia: This is the solution to the famous math problem in the film "Good Will Hunting". – axblount Oct 5 '12 at 16:45

A standard breadth-first traversal of the graph will visit the vertices in order of increasing distance from the starting node. Furthermore, each node will get added to the work list while you're looking at the next-to-last node in a shortest path to the target node -- so you can store the distance to each node in the order they get added to the worklist. Once you see a node with distance $k+1$, you can stop everything. This search is linear in the number of edges within $k$ of the starting node. -

Excellent, exactly what I need to get started. Thanks! – Arkamis Oct 5 '12 at 16:08

Call the vertex $v$. $d(v, v) = 0$. The vertices, $u$, that satisfy $d(u, v) = 1$ are exactly the neighbors of $v$. The vertices, $w$, that satisfy $d(w, v) = 2$ are exactly the neighbors of neighbors of $v$ that have not yet been used. And so on.
Keep going until you get to distance $k$. -
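Both the matrix-power answer and the breadth-first answer translate directly into short programs. Here is a minimal sketch of each (a rough illustration, assuming an adjacency-list dict for the BFS version and a 0/1 NumPy matrix for the power version; the function names are mine):

```python
import numpy as np
from collections import deque

def at_distance_bfs(adj, start, k):
    """Breadth-first search, stopping once the frontier passes level k.
    `adj` maps each vertex to a list of its neighbors."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] >= k:           # BFS pops vertices in distance order
            break
        for v in adj[u]:
            if v not in dist:      # first visit gives the shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v for v, d in dist.items() if d == k}

def at_distance_matrix(A, start, k):
    """Matrix-power method: d(start, v) = k iff the (start, v) entry of
    A, A^2, ..., A^(k-1) is 0 while the entry of A^k is nonzero."""
    n = A.shape[0]
    seen = np.zeros(n, dtype=bool)   # reachable within distance < k
    seen[start] = True
    P = np.eye(n, dtype=np.int64)
    for _ in range(k - 1):
        P = P @ A                    # P runs through A^1 .. A^(k-1)
        seen |= P[start] != 0
    P = P @ A                        # now P = A^k
    return {v for v in range(n) if P[start, v] != 0 and not seen[v]}

# Example: a square with a tail, edges 0-1, 0-2, 1-3, 2-3, 3-4.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(at_distance_bfs(adj, 0, 2))    # {3}

A = np.zeros((5, 5), dtype=np.int64)
for u, nbrs in adj.items():
    A[u, nbrs] = 1
print(at_distance_matrix(A, 0, 2))   # {3}
```

The BFS version does work proportional to the edges it actually explores, which is why Henning's answer recommends it for large graphs; the matrix version is a direct transcription of the accepted answer and, as the comments note, systems like Sage already ship the BFS variant as `breadth_first_search`.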
http://mathhelpforum.com/discrete-math/130042-different-ways-counting.html
# Thread: 1. ## Different ways of Counting ....

(a) Let n be a nonnegative integer. Use the identity $(1 + x)^n(1 + x)^n = (1 + x)^{2n}$ to show that $\sum_{k=0}^{n}\binom{n}{k}^2 =\binom{2n}{n}$ (1)

(b) Prove (1) again by counting, in two different ways, the number of ways of choosing n people from a set of n girls and n boys.

2. Originally Posted by shmounal: (a) Let n be a nonnegative integer. Use the identity $(1 + x)^n(1 + x)^n = (1 + x)^{2n}$ to show that $\sum_{k=0}^{n}\binom{n}{k}^2 =\binom{2n}{n}$ (1)

Compare the coefficient of $x^n$ in both expressions above. Tonio

(b) Prove (1) again by counting, in two different ways, the number of ways of choosing n people from a set of n girls and n boys.
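Identity (1) is also easy to confirm by machine for small n, as a quick sketch (Python 3.8+ is assumed for `math.comb`):

```python
from math import comb

# Check that sum_k C(n,k)^2 == C(2n,n) for n = 0..10.
for n in range(11):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)
print("identity (1) holds for n = 0..10")
```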
http://physics.stackexchange.com/questions/13165/oscillations-of-blocks-connected-by-a-spring/13180
# oscillations of blocks connected by a spring

Imagine two blocks of masses m1 and m2 joined together by a spring of spring constant K. Now let the spring be stretched by a distance X and then the system be released. Suppose during the stretching the block of mass m1 moves towards the left by a distance A and the block of mass m2 by a distance B. Now the center of mass of the system at any instant will remain at rest. Let it be present at a point P in between the two blocks. So can I depict the oscillations of the two blocks as two independent oscillations about the center of mass? I appeal for an answer with strong reasons. -

## 2 Answers

The two oscillations will not be independent, as they will share frequency and phase. You can start from Newton's equations of motion. $$m_1 \frac{{\rm d}^2\,a(t)}{{\rm d}t^2} = -F(t)$$ $$-m_2 \frac{{\rm d}^2\,b(t)}{{\rm d}t^2} = F(t)$$ $$F(t) = k \;\left( a(t)+b(t) \right)$$ Assume simple harmonic motion $a(t) = A \sin(\omega\,t)$, $b(t) = B \sin(\omega\,t)$, which leads to the frequency equation $$k\,\left(m_1+m_2\right) = m_1 m_2 \omega^2$$ and the amplitude equation $$A\, m_1 = B\, m_2$$ So now you can show that the center of gravity $P$ does not move as long as the amplitudes $A$ and $B$ obey the balance equation above. How? Well, what is the equation for the center of gravity? $${\rm cg} = \frac{-B\,m_2+A\,m_1}{m_1+m_2} = 0$$ -

Did you make that image just for this problem? – AlanSE Aug 4 '11 at 14:24

yes - and it is pretty cruddy too. In fact, 80% of solving a problem is framing the problem and making a nice sketch of all relevant information. – ja72 Aug 4 '11 at 16:08

I found it to be rather impressive. This is a high quality answer overall IMO. – AlanSE Aug 4 '11 at 16:21

Well, thanks to all for your responses. – Primeczar Aug 5 '11 at 16:23

Traditionally, the "mass on a spring" is analyzed with one end of the spring held fixed. Your problem is equivalent to two such systems back-to-back. You have identified the center of gravity as a fixed point; so you can literally fix it and then cut the problem in two. Each half of the system then behaves like a traditional "mass-on-a-spring". -
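The frequency equation and the fixed-center-of-mass claim in the first answer can be checked by integrating the equations of motion directly. A minimal sketch (SciPy is assumed; the masses, stiffness, natural length, and stretch are made-up numbers):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up parameters; X is the initial stretch, L0 the natural length.
m1, m2, k, X, L0 = 2.0, 3.0, 50.0, 0.1, 1.0
omega = np.sqrt(k * (m1 + m2) / (m1 * m2))   # frequency equation above

def rhs(t, y):
    x1, x2, v1, v2 = y
    F = k * (x2 - x1 - L0)                   # spring tension
    return [v1, v2, F / m1, -F / m2]

# Split the stretch so that A*m1 = B*m2, i.e. the center of mass is at rest.
y0 = [-m2 / (m1 + m2) * X, L0 + m1 / (m1 + m2) * X, 0.0, 0.0]
T = 2 * np.pi / omega
sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-10)

# After one predicted period the state returns to where it started...
assert np.allclose(sol.y[:, -1], y0, atol=1e-6)
# ...and the center of gravity never moves.
cg = (m1 * sol.y[0] + m2 * sol.y[1]) / (m1 + m2)
assert np.allclose(cg, cg[0], atol=1e-7)
print("period 2*pi/omega and fixed center of mass confirmed")
```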
http://www.physicsforums.com/showthread.php?p=3773596
## Is the Higgs boson already discovered?

Alberto Palma's recent paper arXiv:1202.0217 says in its conclusions: "The ATLAS collaboration presents first results of the direct search for the SM Higgs boson decaying to $b\bar b$. No evidence of the Higgs boson was found in a $pp$ collision data sample of $\mathcal L=1.04\ \mathrm{fb}^{-1}$ at $\sqrt{s}=7\ \mathrm{TeV}$. Instead, upper limits on the Higgs boson production cross section of between 10 and 20 times the SM value were determined, in a mass range $110<m_H<130\ \mathrm{GeV}$". Does it mean that the Higgs boson is already discovered or not yet? What is the exact meaning of these words? What are the most recent news in this area?

My understanding is that the folks at CERN think they have probably discovered the Higgs but the results are not yet conclusive and any announcement would be premature at this point. What they definitely HAVE done is refine the range of possible values for the mass of the Higgs if it does exist.

The current trend is we think the Higgs is living somewhere at ~126 GeV, see: http://arxiv.org/abs/1202.1408 But the statistical significance of this is low enough that we shouldn't (and cannot) declare it to be a true detection. The paper you cite is looking at an (I believe) more difficult decay channel, and is using less data. Plus, it looks like they've only set a bound at ~2 sigma.

Quote by Ruslan_Sharipov: Does it mean that the Higgs boson is already discovered or not yet? What is the exact meaning of these words? What are the most recent news in this area?

The important sentence in your quote is "No evidence of the Higgs boson was found." They looked, but they didn't see it. However, they weren't able to rule out its existence if it has a mass between 110 and 130 GeV. If the Higgs boson does exist and has a mass in this range, they expected not to see it. Note that this is only reporting the result of one particular method of searching for the Higgs. Other searches, which look at other possible decay products, have produced hints (not proof) that the Higgs exists and has a mass around 125 GeV.

Hi, without invoking the Higgs, the electroweak theory is said to be non-renormalizable because one has to introduce mass directly into it. Can anyone please explain briefly why introducing mass directly into the equations would make it non-renormalizable? Thanks.

Concerning renormalizability, I have heard the following proposition: "A field-theoretic model is renormalizable if and only if its Lagrangian is of degree not higher than 4 with respect to the fields involved". I don't know its proof and I would like to ask someone more competent than me here: is this proposition true or not?

The main point is that in order to have a consistent theory with gauge bosons (or generally, vector bosons), you need to have a symmetry which is called gauge invariance. This symmetry doesn't exist if you introduce masses for the gauge bosons. Why do you need this symmetry? An inspection shows that a massless vector boson (spin 1 particle) has two degrees of freedom (for example, a photon has two polarizations) and a massive one has three degrees of freedom.
However, to introduce a vector boson into your theory in a Lorentz invariant way, you have to use an object which is a Lorentz vector A_{μ}, which has 4 degrees of freedom. Therefore, some of them are not physical, and the gauge invariance makes sure they don't contribute to physical observables. If you were to introduce masses directly for the vector bosons, these degrees of freedom would behave badly at high energies, producing results which make no sense (probabilities not summing to one...). New degrees of freedom are required to cancel this bad behaviour (in the SM case, it's the Higgs), which are exactly the degrees of freedom that restore the gauge invariance. In that sense, introducing a mass directly results in a theory which makes sense only up to the energy scale at which these new degrees of freedom are required, similar to a standard non-renormalizable theory which is valid only up to a certain cutoff scale. For the standard model without the Higgs, this cutoff scale is about 800 GeV, meaning some new degrees of freedom have to exist up to that scale (the Higgs or something else). Hope that helps

The original papers which discovered that massive gauge bosons lead to nonrenormalizable theories are listed in reference 13 of http://arxiv.org/abs/hep-ph/0401010. Studying their arguments might be a good way to learn about renormalization technicalities, because there are many subtle details. For example, you can have a massive abelian gauge boson; it's massive nonabelian gauge bosons which lead to a nonrenormalizable theory, because they contain some extra interactions (and thus extra divergences) not present in the abelian case. Also, as often happens in QFT, it seems there is no absolute proof of nonrenormalizability - as late as Veltman's paper in 1968, he's emphasizing that renormalizing the theory will be difficult but maybe not impossible - it seems that everyone gave up only after Boulware 1970. And 't Hooft's construction of renormalizable theories where the gauge bosons get a mass through the Higgs mechanism came just a year or two later. Also let's remember that nonrenormalizable theories are not useless or evil, it's just that renormalizability is good because it means the theory can be extrapolated to high energies. As Weinberg mentions, Fermi's original theory of the weak interaction was nonrenormalizable. But it still works within its range of validity.

Recent reports [http://arxiv.org/abs/1202.1408] suggest an overabundance of events around 126 GeV with 3.5 sigma [roughly 99.9%] probability. The probability of this being due to random background noise over a range of energies is estimated at 1.4% [2.2 sigma]. The next LHC run may tell the tale. Particle physicists prefer a 5 sigma confidence level before detection is deemed confirmed.

You mean that if I take an abelian gauge theory (for example, QED) and add an explicit mass term which breaks gauge invariance, the theory will have no problems with unitarity? At least, up to the scale where you hit a Landau pole?

Wow. The Higgs is almost found. Since the LHC hasn't found any superpartners, hidden dimensions, or black holes, I guess the only thing the LHC will ever find is the Higgs before it shuts down in a few years.

What a load of ill-informed nonsense! First, there is no such thing as "almost found". Either it's found or it's not. The experiments have presented their data. Either you are convinced that they have found it, or you are not.
The experiments themselves are not convinced. Second, the idea that because no new physics has been seen with 1% of the total data at half the design energy, no new physics will ever be seen, is utterly ridiculous. Finally, "a few years" is more like 20.

Quote by Vanadium 50: What a load of ill-informed nonsense! First, there is no such thing as "almost found". Either it's found or it's not. The experiments have presented their data. Either you are convinced that they have found it, or you are not. The experiments themselves are not convinced. Second, the idea that because no new physics has been seen with 1% of the total data at half the design energy, no new physics will ever be seen, is utterly ridiculous. Finally, "a few years" is more like 20.

Oh, so there is still a chance the supersymmetric partners that will solve the hierarchy problem will be found. Good. I thought they had already been given up on, as not one of them was seen. Thanks for the heads up.

Quote by ofirg: You mean that if I take an abelian gauge theory (for example, QED) and add an explicit mass term which breaks gauge invariance, the theory will have no problems with unitarity? At least, up to the scale where you hit a Landau pole?

Yes, it's true - just look up unitarity and renormalizability of Proca theory.

Thanks. If I understand correctly, massive QED (Proca theory) is just a particular gauge of a gauge invariant theory (the Stückelberg model), where the scalar field can be removed from the theory. Basically, in order to render massive QED gauge invariant, one only needs to add non-physical (gauge redundant) degrees of freedom, in contrast to the nonabelian case, where at least one physical degree of freedom is needed (the Higgs particle). Is this also true for an abelian gauge field which couples chirally to fermions, like hypercharge?

Quote by mitchell porter: .....For example, you can have a massive abelian gauge boson; it's massive nonabelian gauge bosons which lead to a nonrenormalizable theory, because they contain some extra interactions (and thus extra divergences) not present in the abelian case.....

Massive QED is renormalizable? Can you give some reference for that?

Quote by mitchell porter: Yes, it's true - just look up unitarity and renormalizability of Proca theory.

I looked it up and it is indeed true. The troublesome $k_\mu k_\nu$ term drops out of any Feynman diagram due to U(1) gauge invariance.
http://math.stackexchange.com/questions/146831/approximation-for-pi?answertab=oldest
# Approximation for $\pi$

I just stumbled upon $$\pi \approx \sqrt{ \frac{9}{5} } + \frac{9}{5} = 3.141640786$$ which is $\delta = 0.0000481330$ different from $\pi$. Although this is a rather crude approximation, I wonder if it has ever been used in past times (historically). Note that the above might also be related to the golden ratio $\Phi = \frac{\sqrt 5 + 1}{2}$ somehow (the $\sqrt5$ is common in both). $$\Phi = \frac{5}{6} \left( \sqrt{ \frac{9}{5} } + \frac{9}{5} \right) - 1$$ or $$\Phi \approx \frac{5}{6} \pi - 1$$ I would like to know if someone (known) has used this, or something similar, in their work. Is it at all familiar to any of you? -

– Kirthi Raman May 18 '12 at 21:01

@Artin, the first link does not apply because it is an identity formula and not an approximation, and the second link is already included in the original posting. – ja72 May 18 '12 at 21:03

What are you expecting, validation of this approximation and some sort of variation of this approximation? – Kirthi Raman May 18 '12 at 21:16

I am looking for someone to say: I have seen this and it was used by x, or this is related to y approximation. – ja72 May 18 '12 at 21:21

## 2 Answers

Ramanujan found this approximation, among many others, according to Wolfram MathWorld (equation 21 on the linked page). -

Perfect! Exactly what I was looking for. – ja72 May 18 '12 at 21:29

I have not seen it before. Note that $\pi = \sqrt{a} + a$ where $a = (1+2\,\pi -\sqrt {1+4\,\pi })/2$, and what you're saying is that a rational approximation of $a$ is $9/5$. In fact, we have a continued fraction $$a = 1 + \dfrac{1}{1 + \dfrac{1}{3+ \dfrac{1}{1+\dfrac{1}{1139 + \ldots}}}}$$ and $1+1/(1+1/(3+1/1)) = 9/5$. The fact that the first omitted element, $1139$, is so large makes this a very good approximation: the error in approximating $a$ by $9/5$ is only about $3.5 \times 10^{-5}$. Four elements later comes $7574$, so an even better approximation is $1+1/(1+1/(3+1/(1+1/(1139+1/(1+1/(15+1/1)))))) = 174530/96963$ with error about $1.4 \times 10^{-14}$.

EDIT: Perhaps even more remarkable are $$\eqalign{\pi - \sqrt{1 + \dfrac{47}{35} \pi} &\approx \dfrac{6}{7}\cr \pi - \sqrt{\dfrac{3}{5} + \dfrac{5}{2} \pi } &\approx \dfrac{216}{923}\cr}$$ corresponding to the continued fractions $$\eqalign{\pi - \sqrt{1 + \dfrac{47}{35} \pi} &= \dfrac{1}{1+ \dfrac{1}{6 + \dfrac{1}{126402+ \ldots}}}\cr \pi - \sqrt{\dfrac{3}{5} + \dfrac{5}{2} \pi} &= \dfrac{1}{4+\dfrac{1}{3+\dfrac{1}{1+\dfrac{1}{1+\dfrac{1}{1+\dfrac{1}{19+\dfrac{1}{133286+\ldots}}}}}}}\cr}$$ -
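Both observations in the question are easy to confirm numerically, and the $\Phi$ relation is in fact exact, since $\frac{5}{6}\left(\sqrt{9/5}+\frac{9}{5}\right)=\frac{3+\sqrt{5}}{2}=\Phi+1$. A quick check in plain Python:

```python
from math import pi, sqrt

approx = sqrt(9 / 5) + 9 / 5
print(approx - pi)               # ~4.8133e-05, the delta quoted above

phi = (sqrt(5) + 1) / 2
print(5 / 6 * approx - 1 - phi)  # 0.0 up to round-off: the relation is exact
```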
http://physics.stackexchange.com/questions/54354/how-does-locality-decouple-the-uv-and-ir-behaviour-of-a-qft?answertab=votes
# How does locality decouple the UV and IR behaviour of a QFT?

I came across a comment in this paper: Scattering Amplitudes and the Positive Grassmannian, in the last paragraph of page 104, which says: "One of the most fundamental consequences of space-time locality is that the ultraviolet and infrared singularities are completely independent." How do I understand this? -

## 2 Answers

One thing to keep in mind is that IR and UV divergences appear in different kinematical regimes. UV divergences are basically due to the fact that in loop integrals there are not sufficient propagators to make the integral fall off at infinity. E.g. the bubble integral $\int d^4l \frac{1}{l^2(l-p)^2}$ will be logarithmically divergent. Do, for instance, a Taylor expansion of this expression for large loop momentum, and this becomes obvious. IR divergences, however, live in a completely different regime: they appear either when two particles become collinear, $p_1\sim p_2$, or because some particles become soft, $p_i\sim0$. Or, put a little more condensed: UV: loop momentum becomes large. IR: external momenta become collinear/soft. This is one way to see why these two kinds of divergence are not connected. Nima and company probably meant just this but in fancier terms. -

Isn't your first integral (UV divergent example) diverging logarithmically and not linearly? – Learning is a mess Feb 19 at 9:19

You're right. Corrected :) Cheers. – A friendly helper Feb 19 at 9:21

2 I think the description is right, but I'm still not sure I understand the connection to the locality of the theory. Is there an easy way to see that, if I have a non local theory, this separation doesn't occur? – twistor59 Feb 19 at 12:56

Sorry for not commenting earlier, but I've been waiting to see if there are any other responses. I have 2 questions with regards to this answer: 1. As @twistor59 says, I don't understand what role locality plays – Siva Mar 15 at 4:30

Secondly, say we have a function $f(x)=\frac{1}{x^k}$. Then, in $n$ dimensions, the asymptotic behaviour ("divergence") of the integral $\int f(x) d^n x$ in the IR and UV is related (and complementary). So why should the divergences not be related? – Siva Mar 15 at 4:36

This quote shows a funny way in which mathematicians perceive and describe what they see. In fact, UV and IR singularities are to a great extent of different nature, but they both are results of a bad formulation of the theory. They both are absent in a good formulation. UV singularities are due to "self-action" contained in the interaction term. This interaction term part is wrong and nobody keeps the UV result intact. Rather, it is discarded (subtracted). Thus, the theory is modified. Is the theory local after renormalization - that's the first question to the authors. After subtracting the wrong self-action contributions, the remainder is "good", but, treated by perturbation theory, it gives divergent results for low-frequency modes. This result is correct - push a low-frequency oscillator with a strong force, and you will obtain huge oscillations proportional to $1/\omega$. When you have many modes in a superposition, each "soft oscillator" has a large amplitude, but after summation (inclusive picture), they give a finite resulting amplitude. These modes are obligatorily all taken into account, not discarded. Only an inclusive picture is meaningful because the purely elastic cross section is zero and each particular mode diverges.
Thus, one needs to sum up soft mode contributions to all orders, and that is equivalent to another initial approximation and another perturbative series, i.e., another formulation of the theory. You see, one has to work hard with the original "local" theory formulation in order to arrive at physical results. Is the resulting theory "local" after renormalization and summation of the soft modes - that's the second question to the authors. There are indications that a correct theory formulation is somewhat "non-local" (see a popular explanation here or here). -
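Returning to the bubble integral in the first answer: its logarithmic UV divergence can be made tangible with a one-dimensional toy, since at large $|l|$ the integrand falls like $1/l^4$ while the four-dimensional measure contributes $l^3\,dl$, leaving $\int^\Lambda dl/l\sim\log\Lambda$. A throwaway numerical sketch:

```python
import numpy as np

# Toy radial model of the UV region of the bubble integral: for large |l|
# the integrand behaves like 1/l^4 while the 4d measure grows like l^3 dl,
# so the integral up to a cutoff Lambda grows like log(Lambda).
for Lam in [1e2, 1e4, 1e6]:
    l = np.geomspace(1.0, Lam, 400001)
    y = l**3 / l**4
    I = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(l)))   # trapezoid rule
    print(f"Lambda = {Lam:.0e}: integral = {I:.3f}, log(Lambda) = {np.log(Lam):.3f}")
```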
http://mathoverflow.net/questions/15028?sort=oldest
## Do you find your students are less competent in basic algebra and arithmetic, and, if so, do you believe that this is due to overuse of calculators at an early level? [closed]

So first I gave my class the quiz problem: Compute $$\lim_{h\rightarrow 0} \frac{\frac{1}{3+h} - \frac{1}{3}}{h}.$$ Upon finding that they could not do that (no real surprise) I asked them to compute $\frac{1}{3.01}-\frac{1}{3}$ in hopes that they would recognize the kernel of the former problem in the latter, and in hopes of indicating that it is perfectly reasonable for an entering college student to be able to add fractions. A disturbingly large number of students could not perform the latter arithmetical calculation even though I had made comments about how to add fractions within class. I imagine my experience is not unique. And I think that the current forum may have a sufficiently large readership to deliver an informed opinion about whether or not calculator use is inhibiting algebraic ability. If it is not the calculator, then what is the cause? -

26 Calculator use does not inhibit algebraic ability. Failure to be taught algebraic abilities does.... – Mariano Suárez-Alvarez Feb 11 2010 at 20:46

6 I think the main cause is just like Mariano said; it's that there's not much of an emphasis in many courses on calculation these days. I blame it on the "it's better to know concepts" trend; as if concepts could be somehow separated from getting your hands dirty... – Jason Polak Feb 11 2010 at 21:05

42 Here's my take on this sort of question. I think professional mathematicians can easily slip into "grumpy old man" syndrome, usually of the form "kids these days seem much less bright than kids x years ago" when x years ago is when the speaker was a kid. I think this is just a myth. When I was a kid I was better at maths than most people around me, but that's no surprise because it's me that became the professional mathematician. I think my view of "what kids aged 19 should know" is indelibly tainted by what I knew at 19. I think you just have to step back and realise "you're not the norm". – Kevin Buzzard Feb 11 2010 at 23:49

10 I hope that this question does not become (1) too discussiony, or (2) devoid of any actual data. Alas, I fear that questions like this are bound to become (1,2). – Theo Johnson-Freyd Feb 11 2010 at 23:59

3 I agree with Kevin and Theo above. Next year I will be teaching elementary and middle school teachers how to teach math, so this question interests me; as a Math Overflow question, however, it is inappropriate. Cause and effect in education is a very subtle topic. – Jason Dyer Feb 12 2010 at 1:09

## 13 Answers

Do I ever?! (to your first question) I do not know the cause. Of course, I'm in the UK and therefore our mileages will certainly vary. I suspect, though, that in the end the problem is simply lack of practice. It is alas not uncommon for our students (at least in exam conditions) to be unable to successfully finish a problem, not because they do not understand how to go about solving it, but because once they've done the hard stuff, they are bogged down by what ought to be simple arithmetic, trigonometry,... I have talked at length about this problem with colleagues at my university and several explanations were proffered: 1. Students do not necessarily learn any maths in School: they learn to pass exams.
(I think that "maths" can be replaced by pretty much anything else, and you'll still get a true sentence.) 2. Exams are much more modular now in the UK. It used to be that students were examined precisely twice during the four years of "high school": once at the end of the first two years (the so-called "ordinary level") and once at the end of the last two years (the so-called "advanced levels"). (Strictly speaking this is in England and Wales. In Scotland the system has always been different.) Now one can get examined on less material, which seems to favour "cramming". 3. Students do not spend nearly enough time solving the sort of routine problems which hone their calculational skills. I am not sure whether one can blame this on the use of calculators. At any rate, I agree that it is a disturbing trend and one we would like to reverse. If anyone has had any success at this, please share your thoughts!!! - 3 Well, being bogged down by simple arithmetic, trigonometry and friends is part of the problem: they are also bogged down by their inability to properly read and understand the problem descriptions, and to write down an understandable answer. – Mariano Suárez-Alvarez Feb 11 2010 at 21:21 2 Yes, this is true as well. There's a lack of understanding of basic mathematical language: students are often unsure of what a question is asking, but this they learn quicker. And I also agree that they often have difficulty writing a coherent answer with narrative and not just symbols scattered on the page. To their credit, they get better by their 4th year. – José Figueroa-O'Farrill Feb 11 2010 at 21:25 (Turning what was originally a comment into an answer, for I think it is the answer.) Calculator use does not inhibit algebraic ability: failure to be taught algebraic abilities does.... - 4 I'm not sure you can be taught algebraic manipulation/ability. I believe that this is a question of fluency, like in a language. You only become fluent by practising it. – José Figueroa-O'Farrill Feb 11 2010 at 21:21 3 So - duh - you teach algebraic manipulation by giving your students plenty of practice at it. – TonyK Feb 11 2010 at 21:28 2 OK -- maybe it's semantics, but I don't call that teaching. For instance, I learnt how to integrate by doing many integrals. I don't remember being "taught" to integrate, although I do remember discussing integration tricks with my cohort. – José Figueroa-O'Farrill Feb 11 2010 at 23:12 I think that practice is important, but it has to be practice which includes actual thought. If you are having people mindlessly apply algorithms you will just deaden them. People are not computers. If you are not thinking about distributing when you are practicing the multiplication algorithm, you are wasting your time. If you don't understand the place value system well enough to understand what is really going on in the algorithm for addition, and you are not using that understanding to guide your calculations, then it will not be surprising when you think math is a BORING FORMAL GAME. – Steven Gubkin Feb 12 2010 at 12:42 Students are not being taught that mathematics is something you can think about: it is being taught as a mindless system of rules. I would guess that fewer than 5% of people in the world can really explain why (3/5)(2/7) = 6/35.
There is a very clear picture which explains why, but no one ever gets taught this picture. Try asking any of your students why this is true: the answer you will get is "because that's the rule". - 2 It only gets worse after a typical AP calculus class. How many students actually understand the product, quotient, or chain rules? l'Hopital's rule? – Qiaochu Yuan Feb 11 2010 at 21:56 3 Only those who are going to go on to be mathematicians. – David Mandell Freeman Feb 11 2010 at 22:21 3 Exactly! And the problem here seems to be that the people teaching just don't know, because they often don't know any math at all! Surely, most of my grade school teachers did not know math at a level above the error-ridden textbooks, and did not know the context that "definitions" come about in, nor where the rules come from, or how they are related to them. I know I relied on calculators a lot in school, but once I started learning why things worked, doing calculations myself was trivial, and calculators could actually be helpful to discovering new things by myself. – jeremy Feb 11 2010 at 22:26 There is some hope. My kids (grades 1 and 3) are in a school district that uses this thing called "Everyday Math" (somewhat of a misnomer). I was warned by a chemist friend of mine that it's a terrible system, but I think I have come to a different conclusion. I think it helps develop the exact skills we thought were disappearing. Unfortunately, we're hitting rock bottom with the next few incoming classes of college freshmen. But I think (I hope) it gets better from there. – Ian Durham Feb 12 2010 at 2:47 I have the same issue with business students in my class so I guess the problem is more widespread than just math undergrads. In order to combat the issue, I re-designed my course so that repetition is the key theme. In other words, the same concept/formula is emphasized via in-class examples that are solved by me, via out-of-class graded assignments, via in-class ungraded assignments and sample exams. Students are allowed to work with each other on assignments and sample exams but the exams are individual exams. My hope is that repeated use of the same formula/concept in different contexts and allowing them to talk to each other for assignments/sample exams will help them internalize the ability to answer questions involving basic algebra/arithmetic on the exam as well. I am not sure to what extent my answer generalizes to math or if it gives you any ideas for your own course. Their performance on the first exam (scheduled for next week) will probably tell me if my approach is working or not. - For what it's worth, there is a fairly specific villain to blame for this problem in the school district where I attended high school. In this district - which is not the district I grew up in - there is an awful math curriculum called CORE which is taught from first grade on and which emphasizes students "discovering" concepts on their own, etc. in place of teaching them basic skills. My understanding is that this is a holdover from reaction to (?) the "New Math" movement, and as far as I can tell, what it produces are college students who cannot divide 42 by 7 without a calculator. (I experienced this while tutoring an otherwise very bright friend of mine in calculus, and while the calculator plays a pernicious role in this story I don't think it's the culprit.) So at least where I come from, the problem - at least as it seems to me - is that the curriculum has changed for the worse.
I don't know how serious an issue this is in other parts of America or in other countries, though. - 3 I was subject to New Math, and I do not recall having had to discover anything... – Mariano Suárez-Alvarez Feb 11 2010 at 22:05 I think the problem here is that, to do that kind of thing well, you have to actually know math at a pretty intimate level to be able to properly guide students through it. This works very well when, e.g., I tutor people to supplement what they're learning in classes, but is a terrible failure when someone who doesn't understand how everything is related to everything else (in other words, all primary and secondary school math teachers) tries it. – jeremy Feb 11 2010 at 22:28 Many of my courses were "new math." I learned to manipulate rational functions before learning to add fractions, but once I learned the latter (which I perceived as more difficult because I had not developed the facility with factorizations), I was able to see, in retrospect, that they were the same idea. – Scott Carter Feb 12 2010 at 0:49 1 My apologies; yes, I think I was a little confused. The point is that I think there was some sort of ideological impetus for this program to exist, and that it was a bad idea. – Qiaochu Yuan Feb 12 2010 at 7:07 3 This "discovering math on your own"-thingie is the latest fad here in Germany; I still have to find a difference between students whom I let "discover" the product formula or the power series expansion of the exponential function on their own and those who were taught the "traditional" way: they don't differ in performance, and not in attitude towards mathematics. And whereas this method may work (if you have enough time on your hands, that is) for 17 year olds, it does not work for 9 year old kids if "working" means being able to work out 42 : 7 without a calculator. – Franz Lemmermeyer Feb 12 2010 at 19:28 At least in some locations in the US, the problem is summed up in this video. Students in some places aren't being TAUGHT arithmetic anymore. At all. Add to that the general lack of computational repetition that's been trending upwards these days, and I think that this sort of thing explains a large fraction of the problem. - 5 Her problem is that she doesn't understand the stick method or the lattice method for computing products. Since she doesn't understand, she critiques them. Using four crossed sticks is pretty amazing, and A SKILLED elementary school teacher could train students to learn that (a+b)(c+d)=ac+bc+ad+bd by using sticks. A SKILLED teacher could go from multiplying to represent area to sticks. Too often elementary school teachers hate math and have no deep understanding. – Scott Carter Feb 11 2010 at 23:15 1 But as we know, methods that are excellent in the hands of a skilled teacher can be disastrous in the hands of an incompetent or average one, and we have a majority of the latter (re: math, especially) in elementary and secondary education. I also found the books' lack of faith disturbing. – Charles Siegel Feb 12 2010 at 1:47 I really liked the "focus" multiplication algorithm - I think a student would have a better chance of understanding what was going on that way. If you ask the average child why they "carry", or why they line up numbers the way they do in the algorithm, they will have no idea. What is the point of having people remember these algorithms if there is no understanding of how they work?
To steal Lockhart's words: What is the point of having a whole generation walking around with "Minus b plus or minus root b squared minus four a c divided by 2 a" floating around in their head? – Steven Gubkin Feb 12 2010 at 14:14 I've not made my mind up about this, and I've had the privilege of teaching little, and teaching students with elite educations who did not have these problems. From my own recollection, being drilled in arithmetic made me bored and rebellious in class. Polynomials got my attention, and transformed the way I did mental arithmetic. The thing about constructivist pedagogy is that I can see how it might be successful, and produce enthusiastic, resourceful students, but it seems to place huge demands on teachers to get that result. We can't fill any country's classrooms with hundreds of thousands of Seymour Paperts. So choice of curriculum has to be pragmatic, fitting the needs of the students you have with the abilities of the teachers you have. And throwing out today's practice because of the dream of a brighter tomorrow sounds unlikely to work out well. But I don't know enough to feel I can pass judgement. No doubt when my daughters reach school age, I will feel differently. - Am I the only person here who thinks that new math had some appeal? – Harry Gindi Feb 12 2010 at 7:32 1 What is new math, for those living outside the USA? – Andrea Ferretti Feb 12 2010 at 8:56 5 "New Math" was an attempt in the 1960s US to catch up to the allegedly-superior Soviet mathematics education. The idea was to teach children mathematics based on the way that mathematicians organized it -- for example, you would teach students basic set theory, logic, algebra, and so on, so that when they learned concrete arithmetic and geometrical skills they would be able to see how these tools fit together. With a mathematics teacher who really understood mathematics, this was often wonderful. Unfortunately, many didn't, and this led to catastrophic algebraic muddle plus a failure to learn basic computations. – Neel Krishnaswami Feb 12 2010 at 10:52 Basically everything concerning education that happens in the US reaches Germany with a time lag of about 10 years. We got our "New Math" in the early seventies, with the same (predictable) results. – Franz Lemmermeyer Feb 12 2010 at 22:10 1 New Math is amusingly presented in a Tom Lehrer song: youtube.com/watch?v=SXx2VVSWDMo – Elizabeth S. Q. Goodman Feb 29 2012 at 6:57 I did not want to answer this question at first, but now that we end up with teacher bashing let me put in my 2 cents. Facts first: I teach kids between 13 and 19 (grade 7 to 13), and I do know some maths; so do my colleagues, although they know somewhat less than I do, of course. I have no problems explaining why we add or multiply fractions the way we do, or why the product rule holds. I think it is idiotic to assume that problems with simple arithmetic magically disappear when we start explaining the mathematics behind it, and in this respect there is no difference between the addition of fractions and the product rule: your explanations reach only 5% of the class; the rest will patiently wait for the recipe. In fact I think the problems start when, instead of teaching children how to add and subtract, we try to make them understand why the algorithms work. I don't think I cared a lot about the fine points of the decimal system when I was 11, and I don't think that today's kids do.
Problems with calculators set in when the kids are 13 and upwards; simple arithmetic is not supposed to be trained because the calculators can do it. By the time they graduate from high school (gymnasium in Germany), my students can form the derivative of $e^{\sin x}$ without problems, but many of them have problems if they have to manipulate $\frac1x - \frac1{x-1}$. The main problem I am having with our current approach (over here) is that, at least in my neck of the woods, being able to use the calculator (TI 83) is compulsory for graduating. So yes, the problem is not the existence of the calculator, it is the teaching; but: the curriculum is designed in such a way that basic arithmetic plays only a minor role, and the reason why it is designed that way is -- the calculator. - 2 "I think the problems start when instead of teaching children how to add and subtract, we try to make them understand why the algorithms work" - This is madness! I simply cannot express how fundamentally we disagree. – Steven Gubkin Feb 12 2010 at 13:24 11 I am not sure whether what you call "people" are the children I'm trying to teach. I can't name a single person who would be interested in why long multiplication works before he has figured out how it works. As for myself, I didn't understand what a class group is until I had computed several dozen of them. Now I might be abstraction impaired, but so are the kids I know. They don't understand Dedekind cuts but happily multiply the square roots of 2 and 3. – Franz Lemmermeyer Feb 12 2010 at 13:39 1 Steven, while I have written about what you speak (hotchalk.com/mydesk/index.php/… ), you have a very generalized idea of what students like. There are an enormous number of strategies developed to reckon with the problem, but one can never presume teachers just aren't trying. As David Cox wrote recently, "I do my best to make my students think, but they still try to become good little algorithm followers." (coxmath.blogspot.com/2010/02/… ) – Jason Dyer Feb 12 2010 at 14:17 8 If I could teach students one at a time instead of 30, I too could do incredible things. Oh, and I don't like your jumping to conclusions about what I do and what I don't do in class. – Franz Lemmermeyer Feb 12 2010 at 16:27 3 I totally agree with Franz! (and being realistic, as an analogy: vastly more people want to learn how to drive than to learn the entire body of science, math, and engineering that makes the driving possible). Franz is being very realistic---and full respect to him for it. – S. Sra Mar 1 2012 at 4:12 Pure mathematics is a kind of art. You cannot teach art: you may teach the history of art, and you may teach which works were created by artistically gifted people. It is the same in math. If you want to educate new mathematicians, you should follow the guidelines used in art academies. They are good: they accept creativity, a fresh outlook, etc., while still teaching the history of art, its movements, and its achievements. Look: a real artist is also a great craftsman in his area! In math there is also a practical aspect: applied math. Here you have to learn more craft than art, but sometimes the art comes out of thin air. The only solution to the problem you mentioned is to teach math in a way which young people see as interesting and important. They have the ability to learn much more complicated things than calculus or even abstract algebra.
If you try to play one of the newest computer games, you will see that it is sometimes more difficult than a proof of something, maybe even of something non-obvious. But it is different: it creates emotions. Try to create emotions in learning math and you will be welcomed by young people, and they will be glad to learn math! That is the way our society works: we like to entertain ourselves! - 2 This issue is a huge component in what I have observed of kids' disengagement from mathematics, not only k-12 but college and grad students in math: first, as this answer notes, people are quite able to do complicated things when they care. Second, presenting math as at best puzzles, at worst drills, often quite disconnected from the rest of life, is not nearly as compelling as anything else. I fear there is really no solution, since some of that seeming disconnectedness is the very power of mathematics... A difficult subject to teach. – paul garrett Feb 29 2012 at 17:52 I think the use of calculators at an early level is a great thing. For one thing, calculators give kids a sense that math actually works, a solid thing that can be checked and thus grasped without guidance. Being only 25 myself, I don't know how that felt for disinterested students back in the days of slide rules. Most practical curricula only teach how to repeat math, that is, to follow well-known recipes in order to find answers people need, and maybe we are seeing disheartening effects of lack of practice doing algebra and arithmetic. I think most people only use the math they've really practiced and feel comfortable with. But if you think about the benefits of playing with math when it comes to pattern recognition and developing logical ideas, then perhaps the more time spent on calculators and other toys, the better. - 1 Not a fan of Asimov's The Feeling of Power, then? ;) – Yemon Choi Feb 29 2012 at 9:11 2 (looks up reference) That's a rather extreme idea. To be clear, what I'm really in favor of are interactive calculational tools and/or games. For example, a derivative-taking program that allows you to graph a function, pick a point by clicking, construct any secant line by clicking another point, and calculate its slope. I'm talking about tools that present mathematical concepts in terms of actually doing the math, rather than formulas. Formulas are a time-saving way of representing mathematics; computer-assisted graphics are too, and less hard to interpret. – Elizabeth S. Q. Goodman Feb 29 2012 at 19:23 I've had previous colleagues say good things about judicious use of GeoGebra and the like. (Though I've not tried such things myself.) – Yemon Choi Feb 29 2012 at 23:52 I feel both human ability and technological assistance should go hand-in-hand. We have to give equal importance to making students use a calculator and to learning how to do it by hand. I also feel we should encourage students to use software like Mathematica and Matlab. Otherwise, what advantage does a future mathematician have over the old-timers! With this background, I feel there should be a clear emphasis on the interpretation of the results a student obtains on performing a calculation. ```` 'The purpose of computing is insight, not numbers.' -Hamming. ```` For example, we can use the series $\frac{1}{1+x} = 1-x+x^{2}-\cdots$ for $|x|<1$ to demonstrate the fact that if $|x|\ll 1$ ($|x|$ far less than one) then $\frac{1}{1+x} \approx 1-x$, and show the results on a calculator.
Say, $(1.001)^{-1}$ can easily be seen, without the use of a calculator, to be approximately equal to $1-0.001=0.999$. A division problem can be turned into a simple subtraction problem. After showing the algebraic manipulation, we can show the calculator result and ask students to interpret the precision and give a good explanation. We could also enhance Mathlete competitions and make students learn to calculate mentally faster than a calculator, for which we need calculators! - [original answer by Chris Leary; tidied slightly by YC] I am in sympathy with Kevin Buzzard's opinion that we mathematicians can become "grumpy old men." For several years (I was perhaps very naive), I labored under the belief that my students had a secondary school math background similar to mine. I have abandoned that belief. I have been at the same college now for over 25 years. I have noticed a decline in the preparation, but mostly in attitude, among our recent students. I wish I could say why this is the case, but I can't really. As far as technology is concerned, I remember an article published in some journal on technology in math education. The article appeared during the height of the calculus reform movement in the US and was based on the authors' experiences at Oklahoma State. One of their conclusions was that, in the hands of talented students, calculators et al. enhance students' performance, but for less talented students, and I still remember the phrase, technology "adds one more layer of obfuscation" between the student and the material. I believe there is something fundamentally wrong with how the US mathematics educational system functions in primary and secondary school. I don't think technology itself is the main culprit. How the technology is used is crucial. A bigger problem is teacher preparation. My college has a school of education and the struggles of the elementary education people with mathematics are legendary. They actively resist learning anything about the math they will be teaching and only want to learn algorithms for solving problems. Even prospective secondary school teachers are not immune. A former student of mine in abstract algebra was incensed at having to learn about factoring polynomials, claiming that she was going to be a teacher, already knew how to factor, and didn't see any value in learning about polynomial rings. Unfortunately, she displayed an amazing inability to factor quadratics on an exam. So student attitudes are sometimes working against us as well. What's wrong, and how to fix it, are not simple questions. I think there is a complex mixture here. Technology is a convenient target (and the criticism is not wholly unjustified). However, educational philosophy and policy, and societal factors, probably play a significant role as well. I'll stop here, because the more I think about these issues, the more discouraged I become. - The OP asks for comments from university-level professors on whether (a) they have seen a decreasing trend in arithmetic skills among their students over time, and whether (b) such a trend might be attributable to the use of calculators. 1973 was roughly the last year in which one could teach freshman calculus to a group of students who had not been exposed to electronic calculators. Anyone who was teaching freshman calc in 1973 is at least ~64 years old, so at most we will have a very thin cohort of teachers who can comment on how their own students in 2012 compare to their past students who used slide rules.
It's also very risky to use anecdotal or subjective evidence to measure such trends. The best objective evidence that I'm aware of is in a book called Academically Adrift (Arum and Roksa, 2011). The authors document that certain downward trends have indeed occurred over the last 50 years. Two such trends are a marked decrease in the time students spend studying and a decrease in the amount of improvement in critical thinking and writing skills that occurs while students are in college. These trends persist even when one controls for such factors as the greater percentage of the population that now attends college. I have been teaching physics at a community college in California since 1996. In my experience the main difference between students who have taken a calculus course and those who haven't is an increased probability that they will be fluent in basic arithmetic and algebra (e.g., being able to solve $a=b/c$ for the variable $c$). This may indicate that they can't pass calculus with a C without these skills, or it may be an example of self-selection. I find that very few students who have passed calculus can do any of the following without extensive coaching and remediation:

- Differentiate or integrate any function that is expressed in terms of variables other than x and y.
- Differentiate or integrate an expression containing symbolic constants.
- State the geometrical interpretations of the integral and derivative.
- Find the value of $x$ that maximizes $-x^2+x-2$.
- State under what circumstances $\Delta y/\Delta x$ is a valid measure of a rate of change, and under what circumstances $dy/dx$ is needed instead.
- Determine whether a car's odometer performs differentiation or integration.

In other words, if we label the courses in a college-level math sequence with successive integers, what I find is that students who have passed course $n$ can only be counted on to display some level of competence in the material covered in course $n-3$ or $n-4$. - I would like to know what kind of "prerequisite exams" you give/have given to your students. Such a simple test of competence before embarking on a study of calculus not only could serve as a measure of what to study, it could serve as a handy cheat sheet of what algebraic manipulations to perform for various problems and how to check one's work. Gerhard "Ask Me About System Design" Paseman, 2012.02.29 – Gerhard Paseman Mar 1 2012 at 0:32 @Gerhard Paseman: My only relevant experience is that ca. 1997, we wrote a diagnostic exam for the students in our algebra-based physics class. The test was mainly about area, volume, ratios, and proportionalities. E.g., if $y$ is proportional to $x^2$, and $x$ goes up by a factor of 3, by what factor does $y$ change? Most couldn't even decode the statement of the question. Many also were unable to distinguish area and volume, e.g., to tell which was more relevant when deciding how much paint was needed to paint a car. We didn't find much correlation between the exam and success in the course. – Ben Crowell Mar 1 2012 at 0:52
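As a quick aside on the quiz problem that opened this thread, here is a short editorial sketch (our own addition, not from any answer above; the thread itself uses no software) checking both of the original computations with sympy:

```python
# Editorial check of the quiz problem at the top of the thread, using sympy.
import sympy as sp

h = sp.symbols('h')

# The limit is the derivative of 1/x at x = 3, namely -1/9.
print(sp.limit((1/(3 + h) - sp.Rational(1, 3)) / h, h, 0))   # -1/9

# The arithmetic warm-up 1/3.01 - 1/3, done in exact fractions.
print(sp.Rational(100, 301) - sp.Rational(1, 3))             # -1/903
```

The second value, $-1/903 \approx -0.0011$, is indeed close to $-1/9 \cdot 0.01$, which is the "kernel" the questioner hoped students would recognize.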
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9713359475135803, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/66288/sylow-subgroups-invariant-under-an-automorphism/66349
## Sylow subgroups invariant under an automorphism

Let $G$ be a finite group and $\sigma$ an automorphism of $G$. Suppose $p$ is a prime and $\sigma$ has prime order $q \neq p$. It's easy to see that $\sigma$ fixes a Sylow $p$-subgroup of $G$ if $\sigma$ is fixed-point free or if $q$ does not divide the order of $G$. Suppose that $q$ does divide the order of $G$. What reasonable assumptions on $G$, or on $H$, the fixed-point subgroup of $\sigma$, would we have to make to guarantee that $\sigma$ fixes a Sylow $p$-subgroup of $G$? Any ideas? Thanks. - If asking for $\sigma$ to fix a Sylow $p$-subgroup is too much to hope for, are there any conditions which guarantee that $\sigma$ will even fix a non-trivial $p$-group? – AJB May 28 2011 at 18:42 2 The normalizer of a Sylow $p$-subgroup of $G$ containing a Sylow $q$-subgroup of $G$ (or, equivalently, the number of Sylow $p$-subgroups being coprime to $q$) would be sufficient. – Derek Holt May 28 2011 at 19:43 Thanks. This would work. If it happens that $q$ does not divide the number of Sylow $p$-subgroups then $\sigma$ will fix a Sylow $p$-subgroup. What if $q$ does divide the number of Sylow $p$-subgroups? Are there any conditions we can impose to guarantee that $\sigma$ fixes any non-trivial $p$-subgroup of $G$? – AJB May 28 2011 at 20:21

## 1 Answer

This is a very general question, perhaps too general for a definitive answer, and as Jack Schmidt pointed out (in a now deleted comment), it is already a delicate question when $\sigma$ is an inner automorphism. There are bad examples (a little different from Jack's) for all symmetric groups of prime degree greater than $3$. If $p$ is a prime, and $G$ is the symmetric group $S_{p}$, then a Sylow $p$-subgroup $P$ of $G$ is self-centralizing of order $p$, so $N_{G}(P)$ has order dividing $p(p-1)$. In fact the order is $p(p-1)$. Hence the only elements of order prime to $p$ which normalize a Sylow $p$-subgroup are powers of a $(p-1)$-cycle. Such elements (apart from the identity) have a unique fixed point, and all other cycles of equal length dividing $p-1$. There are many elements of prime order $q \neq p$ in $S_p$ which are not of this form, for example any element $\sigma$ of prime order $q$ dividing $p-2$. Another type of example is provided by ${\rm GL}(n,p)$. If we take a prime $q$ such that $q$ divides $p^{n}-1$ but does not divide $p^{m}-1$ for any $m < n$, then ${\rm GL}(n,p)$ contains an element $\sigma$ of order $q$ which must act irreducibly on the natural module. Hence $\sigma$ cannot normalize any non-trivial $p$-subgroup $P$ of ${\rm GL}(n,p)$, for if it did, it would stabilize the space of fixed points of $P$, which is proper and non-zero. Note that $P$ can be made as large as desired by making $n$ large enough (though the choice of $q$ will need to vary). In a positive direction, this question does relate to some rather deep theorems in finite group theory. One of these is the Thompson transitivity theorem (which I will call TTT for short). This can be found in Gorenstein's book Finite Groups (Chelsea). Most theorems of this type (and there are several others) require the presence of larger elementary Abelian subgroups.
A weak form of the TTT states that if a Sylow $q$-subgroup $Q$ of $G$ contains a maximal Abelian normal subgroup $A$ with 3 or more generators, and such that $C_G(a)$ is solvable for each non-identity element $a$ of $A$, then all maximal $A$-invariant $p$-subgroups of $G$ are conjugate via an element of $O_{q'}(C_{G}(A))$ ($p$ a prime different from $q$). In particular, the number of such subgroups is prime to $q$. Hence, for example, if we assume $Q$ is $\sigma$-invariant (which we may, possibly after replacing $\sigma$ by an $H$-conjugate, where $H$ is the semi-direct product $G\langle \sigma \rangle$), then $Q$ will permute the maximal $A$-invariant $p$-subgroups by conjugation. The number of these is prime to $q$, so the number fixed by $Q$ (under conjugation) is prime to $q$. Those fixed by $Q$ are the maximal $Q$-invariant $p$-subgroups of $G$. Since $\sigma$ normalizes $Q$, these are in turn permuted by $\sigma$ under conjugation. Since their number is prime to $q$, one of them must be fixed by $\sigma$. Hence if $A$ normalizes a non-trivial $p$-subgroup of $G$, so does $\sigma$. - Thank you for the information. – AJB May 30 2011 at 21:13
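To make the $S_p$ example concrete in the smallest interesting case $p=5$, $q=3$, here is a brute-force editorial sketch (our own addition, plain Python, nothing assumed beyond the standard library): $S_5$ has six Sylow $5$-subgroups, $q=3$ divides $6$, and indeed no element of order $3$ normalizes any of them, since each normalizer has order $5 \cdot 4 = 20$.

```python
# Brute-force check of the S_p example with p = 5, q = 3: conjugation
# by a 3-cycle (an inner automorphism of prime order 3) fixes no
# Sylow 5-subgroup of S_5.
from itertools import permutations

n = 5
G = [tuple(p) for p in permutations(range(n))]
e = tuple(range(n))

def mul(a, b):                       # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(n))

def inv(a):                          # inverse permutation
    out = [0] * n
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def gen(g):                          # cyclic subgroup generated by g
    H, x = {e}, g
    while x != e:
        H.add(x)
        x = mul(x, g)
    return frozenset(H)

# The Sylow 5-subgroups are exactly the subgroups of order 5.
sylow5 = {gen(g) for g in G if len(gen(g)) == 5}
print(len(sylow5))                   # 6, and q = 3 divides 6

sigma = (1, 2, 0, 3, 4)              # the 3-cycle (0 1 2)
conj = lambda P: frozenset(mul(mul(sigma, x), inv(sigma)) for x in P)
print(any(conj(P) == P for P in sylow5))   # False: no Sylow 5-subgroup fixed
```

This matches Derek Holt's comment in reverse: here $q=3$ does divide the number of Sylow $5$-subgroups, and the sufficient condition fails.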
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 104, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365079998970032, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=454706
## Equitable Coin Flip Game

This seems to be a fairly simple probability question but it's stumping me for some reason. "You flip a fair coin until you get a head and win x dollars, where x is the number of flips it takes to get a head. (e.g. H = win $1; TH = win $2; TTH = win $3, and so on.) How much should you pay to play this game for it to be equitable (each player's expectation is equal to 0)?" I'm really not sure if the answer is supposed to be a specific value or a variable, although my guess would be a specific dollar amount. I tried setting up a few equations like (1/2)(x)+(1/2)(L)=0, where x is the dollar amount and L is a loss, but that only gave me L=-x, which didn't make any sense to me. I also tried setting it up as a series that produced x=0,-1,-1/2, which didn't seem to make sense either. I feel like I'm making it more complicated than it has to be, but I have no idea what to do. Anyway, any help would be greatly appreciated. Thanks!

Quote by royboyz12: [the question, quoted in full] The number of flips taken to get a head is modelled by a geometric distribution, with an obvious choice for its parameter (based on the fact that the coin is fair). The answer should be a specific value.

OK, so then if I take the geometric distribution g(x;theta) = theta(1-theta)^(x-1) and plug in theta = 1/2, I get that the geometric distribution is (1/2)^x for x = 1, 2, 3, ... So now where would the equitable part come in? Since it's obvious that there isn't a solution to (1/2)^x = 0, would I let x = 0, which gives a solution of 1? So would $1 be the answer?

Nevermind... I spent well over an hour over-examining this, woke up this morning and it hit me... I forgot that I could just use the geometric distribution as a function to find the expectation and then just set that equal to 0.

Did you get $2 as the answer?

Quote by royboyz12: No I didn't... I probably did it wrong then.
I set the geometric distribution (1/2)^x equal to 0 and solved and got 0 as the answer... I figured that this made sense since the only way to have an expectation of 0 was to not play the game. Because when you play the game, the player is guaranteed to win money, but the probability of winning more money decreases over time. It seemed to make sense at the time. How did you get $2?

You're not guaranteed to win money if it costs you money to play the game. If I pay ten bucks to play, and the flips come up TH, then I'm out 8 dollars. The question is how much should I be paying so that the expected amount that I make each game is zero.

If I understand the problem correctly, the player of this game always gets a return of at least $1. To get an equitable price to pay, I used a weighted average. The possible returns are distributed as follows:

P($0) = 0
P($1) = 0.5 [H]
P($2) = 0.25 [TH]
P($3) = 0.125 [TTH]
etc.

You simply sum these. A 50% chance of winning $1, a 25% chance of winning $2, etc.:

(.5*1) + (.25*2) + (.125*3) + (.0625*4) + ... = $2

Each player's (profit) expectation is equal to zero, as on average the player paying $2 to play this game will win $2 per game. If you use your answer of zero, it would mean the (staking) player would win an average of $2 per game. The variable x is not related to the stake. He would be playing a riskless game where he wins money every time he plays. I have taken 'win amount' to mean 'return', so that a payment of $2 and an outcome of H would lead to a return of $1 (for a $1 loss overall). If he would literally win $1 (returning $2 + $1), the problem would not be solvable.

Ohhh, OK, I see what you're doing. I was somewhat close when I originally summed (1/2)(win)+(1/2)(lose), but the infinite sum case never occurred to me. I guess I was kinda reading the problem a little wrong. That makes much more sense now. Thanks.
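For readers who want to verify the $2 fair price numerically, here is a short editorial sketch (our own addition, plain Python): the exact series and a Monte Carlo simulation of the game both give an expected payout of $2.

```python
# Checking the $2 fair price: exact series and a simulation.
import random

# E[payout] = sum over k >= 1 of k * (1/2)^k = 2 (series truncated here).
exact = sum(k * 0.5**k for k in range(1, 60))
print(exact)                                   # 2.0 to machine precision

def play():
    """Flip a fair coin until heads; payout = total number of flips."""
    flips = 1
    while random.random() < 0.5:               # tails, keep flipping
        flips += 1
    return flips

trials = 10**6
print(sum(play() for _ in range(trials)) / trials)   # roughly 2.0
```

The closed form comes from the standard identity $\sum_{k\ge 1} k x^k = x/(1-x)^2$, which equals $2$ at $x = 1/2$.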
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9728471636772156, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/27602/krauss-operators-for-random-unitary?answertab=active
# Kraus operators for random unitary

Suppose I have a density matrix $\rho$ and I act on it with a unitary matrix that is chosen randomly, and with even probability, from $S = \{ H_1, H_2, \ldots, H_N \}$. I want to write the operation on the density matrix in Kraus form: $\rho^{\prime} = \sum_i O_i \rho O^{\dagger}_i$ Since the operator is chosen evenly, the probability of choosing any $H_i$ is $\frac{1}{N}$. What would be my choices for $O_i$? - 1 Just a little quibble; you're using $H$ to represent a unitary. Upon first glance $H$ suggests Hamiltonian. I'd recommend turning your $H_i$ into $U_i$. – Mark S. Everitt Mar 13 '12 at 1:37

## 1 Answer

One obvious choice is $$O_i = \frac{1}{\sqrt{N}}H_i.$$ There are many other choices. Perhaps you could elaborate some. - 1 If you are interested in using the unitary freedom of the Kraus representation, you can re-express the $O_i$'s as $O_i' = \sum_{j} u_{ij}O_j$, where $u_{ij}$ are entries in a unitary matrix $U$. – jonas Mar 12 '12 at 22:36 Thanks, the form suggested in the answer is the one I am interested in, though your comment is also useful! – Ben Sprott Mar 13 '12 at 13:48
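A quick numerical sanity check of the accepted choice $O_i = \frac{1}{\sqrt{N}}H_i$ (the two concrete unitaries below are a hypothetical example of ours, not from the question): the operators satisfy the completeness relation $\sum_i O_i^{\dagger} O_i = I$, so the resulting map is trace-preserving, and applying it to a density matrix gives a valid density matrix.

```python
# Sanity check of O_i = U_i / sqrt(N) for a hypothetical set of N = 2
# single-qubit unitaries (identity and Hadamard, our own example).
import numpy as np

I2 = np.eye(2, dtype=complex)
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
kraus = [U / np.sqrt(2) for U in (I2, Hd)]

# Completeness relation: sum_i O_i^dagger O_i = I (trace preservation).
print(np.allclose(sum(O.conj().T @ O for O in kraus), I2))   # True

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # the pure state |0><0|
rho_out = sum(O @ rho @ O.conj().T for O in kraus)
print(np.trace(rho_out).real)                      # 1.0, trace preserved
```

The output state here is the equal mixture $\frac{1}{2}(\rho + H\rho H)$, exactly the "apply a uniformly random unitary" channel the question describes.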
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426963925361633, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/64926/if-g-is-not-commutative-then-is-there-always-a-subgroup-that-is-not-a-normal-su
# If G is not commutative, then is there always a subgroup that is not a normal subgroup?

I was having a discussion with a friend of mine about some normal subgroup properties and then came up with the question "if G is not commutative, then is there always a subgroup that is not a normal subgroup?" It's probably easier to analyze this in the following form: $$(\forall H \leq G : H \lhd G) \Rightarrow \forall a,b \in G : ab=ba$$ My question is, can anybody give a proof, or a counterexample (because I don't think it holds), of this statement? Thanks! - 1 – Pete L. Clark Sep 16 '11 at 2:45 @Pete: Good catch; but I think the question is stated better here, so maybe we should close the other one as a duplicate of this one? – Zev Chonoles♦ Sep 16 '11 at 6:38 See also this question: Can a non-abelian subgroup be such that the right cosets equal the left cosets?, and especially this answer (by Robin Chapman), and the comments. – Pierre-Yves Gaillard Sep 16 '11 at 11:40 @Zev: sure, what you suggest sounds quite reasonable. – Pete L. Clark Sep 16 '11 at 12:55

## 1 Answer

The answer is no. The quaternion group provides the smallest counterexample. Another way to write your question is the following: "Does there exist a non-abelian group all of whose subgroups are normal?" Such counterexamples to your above conjecture actually have a specific name, and can be completely classified: these are called Hamiltonian groups. - thank you for the nice counterexample! – user12205 Sep 16 '11 at 11:42
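To make the quaternion counterexample concrete, here is a brute-force editorial sketch (our own addition, using the standard $2\times 2$ complex matrix representation of $Q_8$) that enumerates every subgroup and confirms each is normal, even though $Q_8$ itself is non-abelian:

```python
# Brute-force check that all 6 subgroups of Q8 are normal, with Q8
# represented by the 2x2 complex matrices {±1, ±i, ±j, ±k}.
import numpy as np
from itertools import combinations, product

one = np.eye(2, dtype=complex)
i_ = np.array([[1j, 0], [0, -1j]])
j_ = np.array([[0, 1], [-1, 0]], dtype=complex)
k_ = i_ @ j_
elems = [s * m for s in (1, -1) for m in (one, i_, j_, k_)]

key = lambda m: tuple(np.round(m, 6).flatten())   # hashable label
G = {key(m): m for m in elems}

def is_subgroup(S):          # closure suffices in a finite group
    return all(key(G[a] @ G[b]) in S for a, b in product(S, S))

def is_normal(S):            # elements are unitary, so g^{-1} = g^dagger
    return all(key(g @ G[s] @ g.conj().T) in S
               for g in G.values() for s in S)

subgroups = [set(c) for size in (1, 2, 4, 8)
             for c in combinations(G, size)
             if key(one) in c and is_subgroup(set(c))]
print(len(subgroups))                          # 6: {1}, {±1}, <i>, <j>, <k>, Q8
print(all(is_normal(S) for S in subgroups))    # True: every subgroup is normal
```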
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307430386543274, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/11150/zero-to-the-zero-power-is-00-1/105102
# Zero to the zero power - Is $0^0=1$?

Could someone provide me with a good explanation of why $0^0 = 1$? My train of thought: for $x > 0$, $0^x = 0^{x-0} = 0^x/0^0$, so $0^0 = 0^x/0^x = ?$ Possible answers: 1. $0^0 * 0^x = 1 * 0^x$, so $0^0 = 1$ 2. $0^0 = 0^x/0^x = 0/0 = \text{undefined}$ Thank you. PS. I've read the explanation on mathforum.org, but it isn't clear to me. - 3 @all: Do we really need tags like (powers) and (zeros)? I think they are uninformative and should not be used. Also there are only 7 questions tagged with at least one of them. Let me know if I'm wrong. – Nuno Nov 21 '10 at 1:55 @Nuno: I agree in general, and in this case. I've edited the tags; Stas said in his comments on my answer that he meant the question in "terms of discrete mathematics", so the discrete-mathematics tag is appropriate, and (powers), (zeros), and (number-theory) are all *in*appropriate. – Arturo Magidin Nov 21 '10 at 3:15 1 Can you pls link to the explanation that you read on mathforum.org? – Lazer Nov 21 '10 at 6:48 1 +1 for good elementary question. – tia Nov 21 '10 at 8:41 2 BTW, this is comprehensively covered on Wikipedia (or see today's version), along with pointers to history and treatment in many systems. – ShreevatsaR Nov 22 '10 at 12:34

## 11 Answers

In general, there is no good answer as to what $0^0$ "should" be, so it is usually left undefined. Basically, if you consider $x^y$ as a function of two variables, then there is no limit as $(x,y)\to(0,0)$ (with $x\geq 0$): if you approach along the line $y=0$, then you get $\lim\limits_{x\to 0^+} x^0 = \lim\limits_{x\to 0^+} 1 = 1$; so perhaps we should define $0^0=1$? Well, the problem is that if you approach along the line $x=0$, then you get $\lim\limits_{y\to 0^+}0^y = \lim\limits_{y\to 0^+} 0 = 0$. So should we define $0^0=0$? Well, if you approach along other curves, you'll get other answers. Since $x^y = e^{y\ln(x)}$, if you approach along the curve $y=\frac{1}{\ln(x)}$, then you'll get a limit of $e$; if you approach along the curve $y=\frac{\ln(7)}{\ln(x)}$, then you get a limit of $7$. And so on. There is just no good answer from the analytic point of view. So, for calculus and algebra, we just don't want to give it any value, we just declare it undefined. However, from a set-theory point of view, there actually is one and only one sensible answer to what $0^0$ should be! In set theory, $A^B$ is the set of all functions from $B$ to $A$; and when $A$ and $B$ denote "size" (cardinalities), then "$A^B$" is defined to be the size of the set of all functions from $B$ to $A$. In this context, $0$ is the empty set, so $0^0$ is the collection of all functions from the empty set to the empty set. And, as it turns out, there is one (and only one) function from the empty set to the empty set: the empty function. So the set $0^0$ has one and only one element, and therefore we must define $0^0$ as $1$. So if we are talking about cardinal exponentiation, then the only possible definition is $0^0=1$, and we define it that way, period. Added 2: the same holds in Discrete Mathematics, when we are mostly interested in "counting" things. In Discrete Mathematics, $n^m$ represents the number of ways in which you can make $m$ selections out of $n$ possibilities, when repetitions are allowed and the order matters. (This is really the same thing as "maps from $\{1,2,\ldots,m\}$ to $\{1,2,\ldots,n\}$" when interpreted appropriately, so it is again the same thing as in set theory). So what should $0^0$ be?
It should be the number of ways in which you can make no selections when you have no things to choose from. Well, there is exactly one way of doing that: just sit and do nothing! So we make $0^0$ equal to $1$, because that is the correct number of ways in which we can do the thing that $0^0$ represents. (This, as opposed to $0^1$, say, where you are required to make $1$ choice with nothing to choose from; in that case, you cannot do it, so the answer is that $0^1=0$). Your "train of thoughts" don't really work: If $x\neq 0$, then $0^x$ means "the number of ways to make $x$ choices from $0$ possibilities". This number is $0$. So for any number $k$, you have $k\cdot 0^x = 0 = 0^x$, hence you cannot say that the equation $0^0\cdot 0^x = 0^x$ suggests that $0^0$ "should" be $1$. The second argument also doesn't work because you cannot divide by $0$, which is what you get with $0^x$ when $x\neq 0$. So it really comes down to what you want $a^b$ to mean, and in discrete mathematics, when $a$ and $b$ are nonnegative integers, it's a count: it's the number of distinct ways in which you can do a certain thing (described above), and that leads necessarily to the definition that makes $0^0$ equal to $1$: because $1$ is the number of ways of making no selections from no choices. Coda. In the end, it is a matter of definition and utility. In Calculus and algebra, there is no reasonable definition (the closest you can come up with is trying to justify it via the binomial theorem or via power series, which I personally think is a bit weak), and it is far more useful to leave it undefined or indeterminate, since otherwise it would lead to all sorts of exceptions when dealing with the limit laws. In set theory, in discrete mathematics, etc., the definition $0^0=1$ is both useful and natural, so we define it that way in that context. For other contexts (such as the one mentioned in mathforum, when you are dealing exclusively with analytic functions where the problems with limits do not arise) there may be both natural and useful definitions. We basically define it (or fail to define it) in whichever way it is most useful and natural to do so for the context in question. For Discrete Mathematics, there is no question what that "useful and natural" way should be, so we define it that way. - 1 @Sivam: yes, in the sense that there is exactly one function from any infinite set to the one-element set. This does not say anything about the status of $1^{\infty}$ as an indeterminate form. The two expressions mean different things in different contexts; that they are designated using the same notation is a convenience, not a necessity. – Qiaochu Yuan Nov 20 '10 at 23:27 3 @Sivam: exactly as Qiaochu says. Note that I said that $0^0=1$ in cardinal exponentiation is the only sensible answer, but "cardinal exponentiation" is not the same as real number exponentiation; when doing real number exponentiation, $0^0$ is most properly undefined/indeterminate. – Arturo Magidin Nov 20 '10 at 23:30 2 @Stas: You don't seem to have any "elementary case". All you have are your "train of thoughts". What case is it you are thinking about? You don't say. You don't tell us. I cannot read your mind from this far away (and the Government doesn't let me do it without a Court order anyway...)
– Arturo Magidin Nov 21 '10 at 2:51 7 Just a small note: the answer depends on whether you think of exponentiation as a discrete operation (as in set theory, algebra, combinatorics, number theory) or as a continuous operation over spaces like real/complex numbers (as in analysis). – Kaveh Nov 21 '10 at 5:47 2 @Stas: Some argue that; there are reasons for saying it should be one, but there are also compelling reasons for it to be other things. In some situations, some limits make more sense than others; if all you are concerned with are analytic functions, then it may make sense to define it to be 1 because in the only cases you will look at you will always get 1 as the limit. But precisely because the answer depends on context, it is left undefined in the abstract and only defined in certain specific contexts (such as combinatorics or cardinal exponentiation). – Arturo Magidin Nov 21 '10 at 19:36 This is merely a definition, and can't be proved via standard algebra. However, two examples of places where it is convenient to assume this: 1) The binomial formula: $(x+y)^n=\sum_{k=0}^n {n\choose k}x^ky^{n-k}$. When you set $y=0$ (or $x=0$) you'll get a term of $0^0$ in the sum, which should be equal to 1 for the formula to work. 2) If $A,B$ are finite sets, then the set of all functions from $B$ to $A$, denoted $A^B$, is of cardinality $|A|^{|B|}$. When both $A$ and $B$ are the empty sets, there is still one function from $B$ to $A$, namely the empty function (a function is a collection of pairs satisfying some conditions; an empty collection is a legal function if the domain $B$ is empty). - 14 You don't need to appeal to the binomial formula. Anytime you write a polynomial as f(x) = sum a_i x^i you need x^0 = 1 to keep your notation consistent, so you need 0^0 = 1 so that f(0) = a_0. – Qiaochu Yuan Nov 21 '10 at 1:12 $0^0$ is undefined. It is an indeterminate form. You might want to look at this post: Why is $1^{\infty}$ considered to be an indeterminate form. As you said, $0^0$ has many possible interpretations and hence it is an indeterminate form. For instance, $\displaystyle \lim_{x \rightarrow 0^{+}} x^{x} = 1$. $\displaystyle \lim_{x \rightarrow 0^{+}} 0^{x} = 0$. $\displaystyle \lim_{x \rightarrow 0^{-}} 0^{x} =$ not defined. $\displaystyle \lim_{x \rightarrow 0} x^{0} = 1$. - 7 Is $\lim_{x\to0}0^x$ really defined? It can only be approached from the positive side. – KennyTM Nov 21 '10 at 19:52 @KennyTM: Accepted and edited accordingly. – user17762 Nov 22 '10 at 11:47 0^0 is undefined by whom? I saw somebody define it; can I now say it is defined? – Anixx Nov 4 '12 at 22:58 $0^{0}$ is just one instance of an empty product, which means it is the multiplicative identity 1. - Some indeterminate forms: $0^{0}, \displaystyle\frac{0}{0}, 1^{\infty}, \infty - \infty, \displaystyle\frac{\infty}{\infty}, 0 \times \infty,$ and $\infty^{0}$. Furthermore, $$\lim_{ x \rightarrow 0+ }x^{0}=1$$ and $$\lim_{ x \rightarrow 0+ }0^{x}=0$$ See http://en.wikipedia.org/wiki/Indeterminate_form - I'm surprised that no one has mentioned the IEEE standard for $0^0$. Many computer programs will give $0^0=1$ because of this. This isn't a mathematical answer per se, but it's worth pointing out because of the increasingly computational nature of modern mathematics, so that one doesn't run afoul of anything. - The use of positive integer exponents appears in arithmetic as a shorthand notation for repeated multiplication. The notation is then extended in algebra to the case of zero exponent.
The justification for such an extension is algebraic. Furthermore, in abstract algebra, if $G$ is a multiplicative monoid with identity $e$, and $x$ is an element of $G$, then $x^0$ is defined to be $e$. Now, the set of real numbers with multiplication is precisely such a monoid with $e=1$. Therefore, in the most abstract algebraic setting, $0^0=1$. Continuity of $x^y$ is irrelevant. While there are theorems that state that if $x_n \to x$ and $y_n \to y,$ then $(x_n + y_n) \to x+y$ and $(x_n)(y_n) \to xy$, there is no corresponding theorem that states that $(x_n)^{y_n} \to x^y$. I don't know why people keep beating this straw man to conclude that $0^0$ can't or shouldn't be defined. - 2 Downvote, because Onez focuses on a very narrow view of the exponentiation operation and its applications, and writes as if that is the only view. Continuity is of rather significant importance in a wide variety of situations, and the requirements of an exponentiation function of continuous arguments are rather different than those limited to integer or rational exponents, and $0^0$ runs into that difference. Another example where the needs differ is $(-1)^{1/3}$. – Hurkyl Oct 26 '11 at 22:06 1 I hardly consider the whole domain of algebra to be narrow. YMMV. In any case, when extending the domain of functions, one may ask: Is the extension useful? Defining 0^0 to be 1 is useful in Combinatorics, Set Theory, and Algebra. Indeed, it is even useful in calculus when using summation notation for polynomial functions and infinite series. Perhaps you care to list several advantages of leaving 0^0 undefined? Particularly in light of the fact that many definitions require the additional caveat that a,b,x,y etc. not be equal to zero in order for them to be true. – Onez Oct 27 '11 at 1:13 It's the view that I said was narrow, not the breadth of application. The advantage to leaving $0^0$ undefined is in situations where continuity is relevant. You might never be in such situations, but others are. Really, there are three separate exponentiation operations in common use -- the algebraic one which is mostly defined for all bases and integer exponents (often extensible to rational exponents), the real one which is defined for positive real bases and real exponents (or some continuous extension thereof), and the complex multi-valued one. $0^0$ only makes sense for the first. – Hurkyl Oct 27 '11 at 16:41 1 By your argument, we should not define (-2)^0 = 1, but leave it undefined since this extension of exponentiation fails your second case (non-positive base) and your third case (no continuous extension of x^y to all of C). You still haven't supplied any advantages to leaving 0^0 undefined. Economy of notation (the reason exponential notation was developed in the first place) is gained by defining 0^0 = 1. – Onez Oct 27 '11 at 21:11 1 Since 0 is not in the range of the exponential function, I take it you have issue with 0^y being defined even for y>0. The argument seems to hinge on whether one is to define 0^0=1 and economize several definitions and theorems from algebra, combinatorics, and analysis, at the expense of one caveat for a single function, OR to leave 0^0 undefined, and have several caveats so as to preserve the continuity on the domain of definition of a single function, namely x^y. Where is the greatest economization to be had? Who has the narrow view? – Onez Oct 28 '11 at 3:29 Take a look at WolframMathWorld's [1] discussion. See if this gives you any clarification. [1] Weisstein, Eric W. "Indeterminate." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Indeterminate.html - I sent them a message to remove 0^0 from indeterminate, with the necessary proof. – wendy.krieger 2 days ago Knuth's answer is at least as good as any answer you're going to get here: http://arxiv.org/pdf/math/9205211v1.pdf See pp. 4-6, starting at the bottom of p. 4. - It's pretty straightforward to show that multiplying something by $x$ zero times leaves the number unchanged, regardless of the value of $x$, and thus $x^0$ is the identity element for all $x$, and thus equal to one. For the same reason, the sum of any empty list is zero, and the product is one. That is, when a product or sum over an empty list is applied to a number, it leaves the number unchanged. Thus if the empty product $\Pi() = 1$, then we immediately see why $0! = 0^0 = 1$. Without this property, one could prove that $2=3$ by the ruse that there are zero zeros in the product on the left (zero is, after all, a legitimate count), so the left side is $2 \cdot 0^0$; and since $0^0$, if indeterminate, could be $1.5$, we would get $2=3$. I think not. The approaches to $0^0$ by looking at $x^y$ from different directions fail to realise that even for lines close to $x=0$, the curve sharply sweeps up to 1 as it approaches $y=0$, and that in the case $x=0$ it may just be that one does not see it sweep up. On the other hand, looking from the other side, even along a diagonal line (i.e. $(ax)^x$), all such curves rapidly rise to 1 as $x$ approaches 0. It's only when one approaches it from $0^x$ that you can't see it rising. So the evidence from the graph of $x^y$ is that $0^0$ is definitely 1, except when approached from $y=0$, when it appears to be zero. - Source: Understanding Exponents: Why does 0^0 = 1? (BetterExplained article) A useful analogy to explain the exponent operator of the form $a \cdot b^c$ is to make $a$ grow at the rate $b$ for time $c$. Expanding on that analogy, $0^0$ can be interpreted as $1\cdot0^0$, which is to say: grow $1$ at the rate of $0$ for time $0$. Since there is no growth (time is $0$), there is no change in the $1$ and the answer is $0^0=1$. Of course, this is just to grok and get an intuition or a feel for it. Science is provisional and so is math in certain areas. 0^0=1 is not always the most useful or relevant value at all times. Using limits or calculus or binomial theorems doesn't really give you an intuition of why this is so, but I hope this post made you understand why it is so and make you feel it from your spleen. - Downvoter explain. I am not trying to be rigorous, I'm just trying to give an intuition and make people truly feel why this should be right. – YatharthROCK May 1 at 14:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 150, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457006454467773, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/experimental-physics+standard-model
# Tagged Questions

### List of cross sections? (0 answers, 49 views)
Sometimes I need to look up a certain cross section, say the inclusive Z production cross section at $\sqrt{s}$ = 7 TeV. Is there a place where 'all the' cross sections are tabulated ...

### How has the quark electric charge been measured directly? (3 answers, 132 views)
How has the electric charge of quarks been measured directly, when quarks are never observed in isolation (due to the phenomenon known as color confinement)?

### Weinberg angle measurement methods (1 answer, 182 views)
I was reading up on the history of $W/Z$ bosons today and I got a little puzzled. I always assumed that people measured $M_Z$ and $M_W$ and then derived the Weinberg angle. But it appears that they ...

### How are the masses of unstable elementary particles measured? (3 answers, 412 views)
I am interested in knowing (Q1) how the particles' masses are experimentally determined from accelerator observations. What kind of particles? They must be, as far as we know, elementary and unstable ...

### Why are the masses of elementary particles not known accurately? (1 answer, 170 views)
In the Standard Model of particle physics, the accuracy in calculating masses is very low, and you cannot predict the upper limit of the Higgs particle mass accurately. Why are the masses of elementary particles not known accurately?

### How can one activate the decay of the b quark with the PYTHIA event generator? (0 answers, 58 views)
This is my problem and I hope to find a solution. In the simplest alternative, MSTJ(22) = 2, the comparison is based on the average lifetime, or rather $c\tau$ (the proper lifetime), measured in mm. Thus ...

### What's the Standard Model width of a 125 GeV Higgs? (1 answer, 462 views)
There's a fairly broad mass spread in the new results out of Atlas and CMS. I'm curious how this fits with the expected SM width.

### ATLAS Higgs Interpretation (2 answers, 69 views)
I came across this abstract, and I am curious as to what the ATLAS team has actually discovered: Abstract Motivated by the result of the Higgs boson candidates at LEP with a mass of about ...

### Left and right-handed fermions (3 answers, 656 views)
Is there a simple intuitive way to understand the difference between left-handed and right-handed fermions (electrons, say)? How can one experimentally distinguish between them?

### Did the researchers at Fermilab find a fifth force? (2 answers, 1k views)
Please consider the publication Invariant Mass Distribution of Jet Pairs Produced in Association with a W boson in $p\bar{p}$ Collisions at $\sqrt{s} = 1.96$ TeV by the CDF-Collaboration, ...

### What is needed to claim the discovery of the Higgs boson? (4 answers, 2k views)
As I understand it, the Higgs boson can be discovered by the LHC because the collisions are done at an energy that is high enough to produce it, and because the luminosity will be high enough also. But ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932747483253479, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/20892-complex-rational-equation-im-having-trouble-print.html
# A complex rational equation I'm having trouble with

• October 18th 2007, 09:41 PM saywhat A complex rational equation I'm having trouble with I've posted before about my fractal project, and this is something else that is related. The function: $f(z)=\frac{1-z}{(2-z)^{\frac{3}{2}}}$ where $|z|=1$. The derivative along this line: $f'(z)=-\frac{ie^{it}(e^{it}+1)\sqrt{2-e^{it}}}{2(2-e^{it})^{3}}$ for a new real variable $t$. I'm trying to find when the real and/or imaginary parts equal zero: $\Re\{f'\}=0$, $\Im\{f'\}=0$. Is there any other way to solve these equations without splitting $f'$ into $x'$ and $iy'$, just because it is so messy and long? I know one method of removing the imaginary terms from the bottom by multiplying top and bottom by $(\frac{1}{2}-e^{it})^3$, factoring $e^{3it}$ out of the denominator, and multiplying numerator and denominator by $\frac{1}{e^{3it}}$, resulting in a denominator of $4\cos t-5$. It's still messy, with a numerator of $-\frac{i}{e^{2it}}(e^{it}+1)(\frac{1}{2}-e^{it})^3\sqrt{2-e^{it}}$, but now, algebraically, the worst part is $\sqrt{2-e^{it}}$. I'd appreciate any suggestions.

• October 19th 2007, 04:46 AM topsquark Quote: Originally Posted by saywhat (original post quoted above) The only thing I can think of is this: $\sqrt{2 - e^{it}} = \sqrt{2 - \cos(t) - i\sin(t)} = \sqrt{2}\sqrt{1 - \frac{1}{2}\cos(t) - \frac{i}{2}\sin(t)}$ $= \sqrt{2} \left ( 1 - \frac{1}{2}\cos(t) - \frac{i}{2}\sin(t) \right )^{1/2}$ And then apply $(\cos(\theta) + i\sin(\theta))^n = \cos(n\theta) + i\sin(n \theta)$. But it's still going to be pretty messy. -Dan

• October 19th 2007, 08:37 AM CaptainBlack Quote: Originally Posted by saywhat (original post quoted above)
As $e^{it}$ goes to $2$, $\left| f'(z) \right|$ goes to $\infty$, so the vanishing of the denominator does not correspond to a root; we can therefore multiply through by $2(2-e^{it})^{5/2}$ to get, for a root $t$: $ie^{it}(e^{it}+1)=0$, so we need $e^{it}=-1$, or $t=\pi$ (a single value, as $t$ is parameterizing the unit circle). RonL

• October 19th 2007, 02:59 PM saywhat To Dan, thanks. I have gotten this far before in other equations and have used a formula to separate into $x$ and $y$, but it is so large it's not worth it. To RonL, I should have specified: I'm looking for the intercept of the real axis and the intercept of the imaginary axis. That point is the cusp of the curve $f(z)$. Thanks anyway. I stumbled upon the identity $\Im \{z\}=\frac{z-\bar z}{2i}$ and, because of the symmetry of $f'$, $\overline{f'(t)}=-f'(-t)$, leading to $\Im \{f'(t)\}=\frac{f'(t)+f'(-t)}{2i}=0$; thus I need to solve $f'(t)=-f'(-t)$, which hopefully won't call for me to find $x$ and $y$. The other equation, $\Re \{f'\}=0$, can also be done in the same manner. I'll tell you how this goes, and thanks again. Oh, and a note: I substituted $z=e^{it}$. I should have said that in the original post.
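For what it's worth, a quick numerical check of RonL's root (a sketch added here, not part of the original thread) — it evaluates the quoted expression for $f'$ at $t=\pi$ and at a generic $t$:

```python
# Numerical sanity check that t = pi is a zero of f'(t) on the unit circle.
import numpy as np

def fprime(t):
    z = np.exp(1j * t)
    return -1j * z * (z + 1) * np.sqrt(2 - z) / (2 * (2 - z) ** 3)

print(abs(fprime(np.pi)))   # ~0: t = pi is a root
print(abs(fprime(1.0)))     # a generic t: clearly nonzero
```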
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 65, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461532235145569, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/210896-solve-proportions-using-cross-products.html
# Thread:

1. ## Solve Proportions using cross products My sister thinks these problems are unsolvable for x. I can't find any examples in my book. I know how to solve basic proportion problems, but these two are hard. 1. x : x-3 = x+4 : x 2. x+1 : 6 = x-1 : x Thanks!

2. ## Re: Solve Proportions using cross products Hey BrokenRitual. Try getting rid of the denominators and collect the terms. For the first one, cross-multiplying gives $x^2 = (x-3)(x+4) = x^2 + x - 12$, so $x = 12$; note that $x$ cannot be $0$ or $3$, since those values put a zero in a denominator. You can try the same kind of technique for the second one.

3. ## Re: Solve Proportions using cross products Hello, BrokenRitual! Why do you say these are "hard"?

$[1]\;\;x : x-3 = x+4 : x$

We have: $\frac{x}{x-3} = \frac{x+4}{x}$. Cross product: $(x)(x) = (x-3)(x+4)$, so $x^2 = x^2 + x - 12$, hence $0 = x - 12$, giving $x = 12$.

$[2]\;\; x+1 : 6 = x-1 : x$

We have: $\frac{x+1}{6} = \frac{x-1}{x}$. Cross product: $(x+1)(x) = 6(x-1)$, so $x^2 + x = 6x-6$, hence $x^2 - 5x + 6 = 0$, $(x-2)(x-3) = 0$, giving $x = 2,\ 3$.
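Both answers are easy to double-check by machine; here is a short sympy verification (a sketch added here, not part of the original thread):

```python
# Verify both proportions by solving the cross-product equations.
from sympy import symbols, Eq, solve

x = symbols('x')
print(solve(Eq(x / (x - 3), (x + 4) / x), x))   # [12]
print(solve(Eq((x + 1) / 6, (x - 1) / x), x))   # [2, 3]
```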
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7041189670562744, "perplexity_flag": "head"}
http://nrich.maths.org/5534/solution
# Root Tracker

##### Stage: 5

Tiffany from Island School and Andrei from Tudor Vianu National College, Romania sent in very good solutions.

(1) If you keep $p$ constant and change $q$, the graph of $$y=x^2 + px + q \quad (1)$$ is translated parallel to the y axis. Consequently the number of intersections of the graph with the x axis (i.e. of real solutions of the equation $x^2+px+q=0$) changes. This number is related to the discriminant of the equation, $\Delta = p^2 -4q \quad (2)$: - if $\Delta > 0$ there are 2 real, distinct solutions - if $\Delta = 0$ there is a repeated (or double) real solution - if $\Delta < 0$ there are no real solutions. a) For $p=-5, q=-6$: $\Delta = 49$ and the x-intercepts are (-1,0) and (6,0), showing 2 real solutions to the equation. b) For $p= -5, q= 4$: $\Delta = 9$ and the x-intercepts are (1,0) and (4,0), showing 2 real solutions. c) For $p = -5, q = 7$: $\Delta = -3$, and there is no intersection with the x axis and no real solution.

(2) If you keep $q$ constant and change $p$, the intersection of the graph of $y=x^2 + px + q$ with the y-axis is kept fixed at the point $(0,q)$. The number of intersections with the x-axis varies and could be 0, 1 or 2, depending again on $\Delta$. From relation (2) it is easy to observe that $\Delta$ has the same value if $p$ changes sign, and that for $p^2 > 4q$ there are 2 distinct real solutions, for $p^2 = 4q$ there are two equal real solutions, and otherwise there are no real solutions. a) For $p=-10, q = 16$ the solutions are $x=2$ and $x=8$. b) For $p=-8, q=16$ the graph is tangent to the x-axis and there is a repeated real solution $x=4$. c) For $p= -6,\ q=16$ there are no x-intercepts, and we shall see that there are 2 complex conjugate solutions.

(3) and (4) When the point $(p,q)$ is in the region below the parabola $p^2 = 4q$, showing $\Delta = p^2 - 4q > 0$, the graph of $y = x^2 + px + q$ crosses the x-axis in two distinct points and the equation $x^2 + px + q = 0$ has 2 distinct real solutions. In this case the roots show up on the $u$-axis in the Argand diagram (called the real axis). When the point $(p,q)$ enters the region above the parabola $p^2 = 4q$, the graph of $y = x^2 + px + q$ no longer crosses the x-axis and the equation $x^2 + px + q = 0$ has no real solutions. In this region the movement of the point $(p,q)$ in the red frame leaves a red track, showing $\Delta = p^2 - 4q < 0$. When the point $(p,q)$ is on the boundary between the two regions, that is on the parabola $p^2 = 4q$, the graph of $y = x^2 + px + q$ is tangent to the x-axis, and there are 2 equal real solutions shown by two coincident points on the $u$-axis in the Argand diagram. The roots show up on the $v$-axis in the Argand diagram (called the imaginary axis) when $p=0$ and $q> 0$.

(5) The roots of the equation $x^2 -6x +13 = 0$ are the complex numbers: $z_1 = 3 + 2i$ and $z_2 = 3 - 2i$ where $i = \sqrt {-1}$.
They satisfy the Viète relations $z_1 + z_2 = 6$ and $z_1 \times z_2 = 13$. Checking that $z_1^2 -6z_1 + 13 = 0$ and $z_2^2 -6z_2 + 13 = 0$, we verify that these are solutions of the equation.

(7) Two complex roots of the quadratic equation $x^2 + px +q = 0$ (where $p$ and $q$ are real numbers and $p^2 < 4q$) are $$z_1 = {-p + \sqrt {p^2 - 4q}\over 2} = u_1 + iv_1 \quad {\rm and}\quad z_2 = {-p - \sqrt {p^2 - 4q}\over 2} = u_2 + iv_2.$$ We see that $u_1 = u_2 = {-p\over 2}$ and $v_1 = - v_2 = {\sqrt {4q - p^2}\over 2}$, so the points in the Argand diagram representing these solutions are reflections of each other in the real axis. The sum of the roots is $z_1 + z_2 = -p$. The product of the roots is $$z_1z_2 = {-p + \sqrt {p^2 - 4q}\over 2}\times {-p - \sqrt {p^2 - 4q}\over 2} = {p^2 - (p^2 - 4q)\over 4} = q.$$
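These relations are easy to check numerically (a sketch added here, not part of the original solution): for real $p, q$ with $p^2 < 4q$, a root finder should return the conjugate pair and satisfy $z_1+z_2=-p$ and $z_1 z_2 = q$.

```python
# Numerical check of the Viete relations for x^2 + px + q = 0.
import numpy as np

p, q = -6.0, 13.0                  # the example x^2 - 6x + 13 = 0
z1, z2 = np.roots([1.0, p, q])     # roots of the quadratic
print(z1, z2)                      # 3+2j and 3-2j (a conjugate pair)
print(np.isclose(z1 + z2, -p))     # sum of roots = -p
print(np.isclose(z1 * z2, q))      # product of roots = q
```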
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.927490770816803, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/187650/solution-for-ode-problem
# solution for ODE problem

I was trying to simulate a physical system, which led me to this equation. I don't know if it has any solution or not, but I guess you guys can help me find the answer. $$v'(t) = a + s * \frac{v(t)}{|v(t)|}$$ in which $t$ is a scalar variable, $s$ is a scalar constant and $a$ is a vector with the same dimensions as $v(t)$ (either 2D or 3D). - 1 If $s$ and $v$ are vectors, what kind of a product does $*$ denote? The only product that produces a vector from two vectors is the cross product, but that's usually not denoted by $*$, and also that wouldn't work in two dimensions. – joriki Aug 27 '12 at 23:17 I believe $s$ is not a vector. – Tunococ Aug 27 '12 at 23:58 @joriki $s$ was a scalar constant, as Tunococ suggested; I've fixed the question. – Gajoo Aug 28 '12 at 11:10

## 1 Answer

Interpretation 1: Perhaps we can write your problem as $$\frac{dv}{dt} - \frac{s}{|v|}v=a$$ provided $s$ is in fact a scalar. Take the dot product with $v$ to obtain: $$v \cdot \frac{dv}{dt} - \frac{s}{|v|}v\cdot v= v \cdot a$$ Hence, $$\frac{1}{2}\frac{d}{dt} (v \cdot v )-s|v| = v \cdot a$$ One silly solution is $v(t)=v_o$, where $v_o$ is taken to fit the condition $v_o \cdot a=-s|v_o|$. I assume $s,a$ are given constants.

Interpretation 2: For another solution, suppose we seek a constant-speed solution, so that $|v|= s_o$. We face $$\frac{dv}{dt} - \frac{s}{s_o}v=a$$ Suppose $s$ is a scalar function of $t$. We can use the obvious generalization of the usual integrating factor technique (note $v$ is a vector, in contrast to the usual context where the typical DEqns student faces $\frac{dy}{dt}+py=q$). Construct $\mu = \exp\left( -\int \frac{s}{s_o}\, dt\right)$. Multiplying by this integrating factor, $$\mu\frac{dv}{dt} - \frac{s}{s_o}\mu v= \mu a$$ By the product rule for a scalar function multiplying a vector, $$\frac{d}{dt}\biggl[ \mu v \biggr] = \mu a$$ Integrating, we reduce the problem to quadrature: $$v = \frac{1}{\mu} \int \mu a \, dt$$ where $\mu = \exp\left( -\int \frac{s}{s_o}\, dt\right)$ and we choose constants of integration such that $|v|=s_o$ (if that's even possible...). - Your answer makes sense, but the problem is I don't know how I can use it in my application! As I said, both $a$ and $s$ are given constants and I know the initial value $v_0$. – Gajoo Aug 28 '12 at 11:18
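If a closed form stays out of reach, the system can at least be integrated numerically. Below is a minimal sketch (added here, under the asker's assumptions: constant vector $a$, constant scalar $s$, known $v_0$) using scipy's `solve_ivp`; the values of `a`, `s` and `v0` are purely illustrative.

```python
# Numerically integrate v'(t) = a + s * v/|v| for a 2D v(t)
# (a sketch under the stated assumptions, not a closed-form solution).
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([0.0, -9.8])   # constant vector (illustrative values)
s = -0.5                    # constant scalar (e.g. a friction-like term)
v0 = np.array([3.0, 4.0])   # initial value, with |v0| != 0

def rhs(t, v):
    speed = np.linalg.norm(v)
    return a + s * v / speed   # undefined at v = 0; assume v stays away from 0

sol = solve_ivp(rhs, (0.0, 2.0), v0, dense_output=True)
print(sol.y[:, -1])            # v at t = 2
```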
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9568761587142944, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/80077-splitting-field-l.html
# Thread:

1. ## splitting field L $\beta$ is the real number $\sqrt{7 + 2\sqrt{7}}$. Basically I have done the parts of finding the minimal polynomial $\mu$ of $\beta$ over $\mathbb{Q}$, and all the zeros of $\mu$, but I got stuck on the part where I need to find the degree $[L:\mathbb{Q}]$ and a basis for $L$ over $\mathbb{Q}$, where $L$ is the splitting field of $\mu$ over $\mathbb{Q}$. Can anyone help me find the degree and basis? Thanks.

2. Originally Posted by dopi (original post quoted above) Let $x = \sqrt{7+2\sqrt{7}} \implies x^2 - 7 = 2\sqrt{7} \implies x^4 - 14x^2+21=0$. Find the zeros of this polynomial and form the splitting field.

3. ## check solution Originally Posted by ThePerfectHacker (previous post quoted) I found the zeros as $\beta = \pm \sqrt{7 \pm 2\sqrt{7}}$ and $\alpha= \pm \sqrt{7 \mp 2\sqrt{7}}$, so I tried $\alpha\beta$, where I got $\sqrt{21}$ — so is the splitting field $L$ of $\mu$ over $\mathbb{Q}$ just $\mathbb{Q}(\sqrt{7})$? Is the degree $[L:\mathbb{Q}] = 4\times 4 = 16$, is a basis for $L$ over $\mathbb{Q}$ given by $\{1, \sqrt{7}, 7, 7\sqrt{7}\}$, and is the order of the Galois group $\Gamma(L:\mathbb{Q}) = 8$?

4. Originally Posted by dopi (previous post quoted) The splitting field is $\mathbb{Q}(\sqrt{7 + 2\sqrt{7}},\sqrt{7-2\sqrt{7}})$.

5. Is the degree $[L:\mathbb{Q}] = 4\times 4 = 16$, is a basis for $L$ over $\mathbb{Q}$ given by $\{1, \sqrt{7}, 7, 7\sqrt{7}\}$, and is the order of the Galois group $\Gamma(L:\mathbb{Q}) = 8$? I was wondering, are these correct?
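A side note added here (not part of the thread): the minimal polynomial and the product $\alpha\beta=\sqrt{21}$ are easy to verify with sympy, as sketched below. Since $\sqrt{21}$ does not lie in the degree-4 field $\mathbb{Q}(\beta)$, the splitting field $\mathbb{Q}(\beta,\alpha)$ has degree 8 over $\mathbb{Q}$ (with dihedral Galois group of order 8), not 16.

```python
# Verify the minimal polynomial of beta = sqrt(7 + 2*sqrt(7)) and the
# product of the "mixed" roots (an illustration, not from the thread).
from sympy import sqrt, symbols, minimal_polynomial, simplify

x = symbols('x')
beta = sqrt(7 + 2 * sqrt(7))
alpha = sqrt(7 - 2 * sqrt(7))

print(minimal_polynomial(beta, x))      # x**4 - 14*x**2 + 21
print(simplify((alpha * beta) ** 2))    # 21, so alpha*beta = sqrt(21)
```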
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8858636617660522, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Artificial_neural_networks
# Artificial neural network

Figure: an artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another.

In machine learning and computational neuroscience, an artificial neural network, often simply called a neural network, is a mathematical model inspired by biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases a neural network is an adaptive system changing its structure during a learning phase. Neural networks are used for modeling complex relationships between inputs and outputs or to find patterns in data.

## Background

The inspiration for neural networks came from examination of central nervous systems. In an artificial neural network, simple artificial nodes, called "neurons", "neurodes", "processing elements" or "units", are connected together to form a network which mimics a biological neural network. There is no single formal definition of what an artificial neural network is. Generally, it involves a network of simple processing elements exhibiting complex global behavior determined by the connections between the processing elements and element parameters. Artificial neural networks are used with algorithms designed to alter the strength of the connections in the network to produce a desired signal flow. Neural networks are also similar to biological neural networks in performing functions collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned. The term "neural network" usually refers to models employed in statistics, cognitive psychology and artificial intelligence. Neural network models which emulate the central nervous system are part of theoretical neuroscience and computational neuroscience. In modern software implementations of artificial neural networks, the approach inspired by biology has been largely abandoned for a more practical approach based on statistics and signal processing. In some of these systems, neural networks or parts of neural networks (like artificial neurons) form components in larger systems that combine both adaptive and non-adaptive elements. While the more general approach of such adaptive systems is more suitable for real-world problem solving, it has far less to do with the traditional artificial intelligence connectionist models. What they do have in common, however, is the principle of non-linear, distributed, parallel and local processing and adaptation. Historically, the use of neural network models marked a paradigm shift in the late eighties from high-level (symbolic) artificial intelligence, characterized by expert systems with knowledge embodied in if-then rules, to low-level (sub-symbolic) machine learning, characterized by knowledge embodied in the parameters of a dynamical system.
## Models

Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function $\scriptstyle f : X \rightarrow Y$ or a distribution over $\scriptstyle X$ or both $\scriptstyle X$ and $\scriptstyle Y$, but sometimes models are also intimately associated with a particular learning algorithm or learning rule. A common use of the phrase ANN model really means the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons or their connectivity).

### Network function

See also: Graphical models

The word network in the term 'artificial neural network' refers to the interconnections between the neurons in the different layers of each system. An example system has three layers. The first layer has input neurons, which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons. More complex systems will have more layers of neurons with some having increased layers of input neurons and output neurons. The synapses store parameters called "weights" that manipulate the data in the calculations. An ANN is typically defined by three types of parameters: 1. The interconnection pattern between different layers of neurons 2. The learning process for updating the weights of the interconnections 3. The activation function that converts a neuron's weighted input to its output activation. Mathematically, a neuron's network function $\scriptstyle f(x)$ is defined as a composition of other functions $\scriptstyle g_i(x)$, which can further be defined as a composition of other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables. A widely used type of composition is the nonlinear weighted sum, where $\scriptstyle f (x) = K \left(\sum_i w_i g_i(x)\right)$, where $\scriptstyle K$ (commonly referred to as the activation function[1]) is some predefined function, such as the hyperbolic tangent. It will be convenient for the following to refer to a collection of functions $\scriptstyle g_i$ as simply a vector $\scriptstyle g = (g_1, g_2, \ldots, g_n)$.

Figure: ANN dependency graph. The figure depicts such a decomposition of $\scriptstyle f$, with dependencies between variables indicated by arrows. These can be interpreted in two ways. The first view is the functional view: the input $\scriptstyle x$ is transformed into a 3-dimensional vector $\scriptstyle h$, which is then transformed into a 2-dimensional vector $\scriptstyle g$, which is finally transformed into $\scriptstyle f$. This view is most commonly encountered in the context of optimization. The second view is the probabilistic view: the random variable $\scriptstyle F = f(G)$ depends upon the random variable $\scriptstyle G = g(H)$, which depends upon $\scriptstyle H=h(X)$, which depends upon the random variable $\scriptstyle X$. This view is most commonly encountered in the context of graphical models. The two views are largely equivalent. In either case, for this particular network architecture, the components of individual layers are independent of each other (e.g., the components of $\scriptstyle g$ are independent of each other given their input $\scriptstyle h$). This naturally enables a degree of parallelism in the implementation.
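To make the composition concrete, here is a minimal sketch (added here, not from the article) of the nonlinear weighted sum $f(x)=K\left(\sum_i w_i g_i(x)\right)$ with $K=\tanh$, stacked into the three-layer input–hidden–output arrangement described above; the weights are arbitrary illustrative values.

```python
# A three-layer feedforward network as nested nonlinear weighted sums,
# f(x) = K(sum_i w_i g_i(x)) with K = tanh (illustrative weights only).
import numpy as np

def layer(W, b, x, K=np.tanh):
    # One layer: apply the activation K to a weighted sum of the inputs.
    return K(W @ x + b)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # input (2) -> hidden (3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)   # hidden (3) -> output (2)

x = np.array([0.5, -1.0])
h = layer(W1, b1, x)   # the hidden vector g = (g_1, ..., g_n)
f = layer(W2, b2, h)   # the network output f(x)
print(f)
```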
Figure: two separate depictions of the recurrent ANN dependency graph.

Networks such as the previous one are commonly called feedforward, because their graph is a directed acyclic graph. Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure, where $\scriptstyle f$ is shown as being dependent upon itself. However, an implied temporal dependence is not shown.

### Learning

What has attracted the most interest in neural networks is the possibility of learning. Given a specific task to solve, and a class of functions $\scriptstyle F$, learning means using a set of observations to find $\scriptstyle f^{*} \in F$ which solves the task in some optimal sense. This entails defining a cost function $\scriptstyle C : F \rightarrow \mathbb{R}$ such that, for the optimal solution $\scriptstyle f^*$, $\scriptstyle C(f^*) \leq C(f)$ $\scriptstyle \forall f \in F$ - i.e., no solution has a cost less than the cost of the optimal solution (see Mathematical optimization). The cost function $\scriptstyle C$ is an important concept in learning, as it is a measure of how far away a particular solution is from an optimal solution to the problem to be solved. Learning algorithms search through the solution space to find a function that has the smallest possible cost. For applications where the solution is dependent on some data, the cost must necessarily be a function of the observations, otherwise we would not be modelling anything related to the data. It is frequently defined as a statistic to which only approximations can be made. As a simple example, consider the problem of finding the model $\scriptstyle f$ which minimizes $\scriptstyle C=E\left[(f(x) - y)^2\right]$, for data pairs $\scriptstyle (x,y)$ drawn from some distribution $\scriptstyle \mathcal{D}$. In practical situations we would only have $\scriptstyle N$ samples from $\scriptstyle \mathcal{D}$ and thus, for the above example, we would only minimize $\scriptstyle \hat{C}=\frac{1}{N}\sum_{i=1}^N (f(x_i)-y_i)^2$. Thus, the cost is minimized over a sample of the data rather than the entire data set. When $\scriptstyle N \rightarrow \infty$ some form of online machine learning must be used, where the cost is partially minimized as each new example is seen. While online machine learning is often used when $\scriptstyle \mathcal{D}$ is fixed, it is most useful in the case where the distribution changes slowly over time. In neural network methods, some form of online machine learning is frequently used for finite datasets. See also: Mathematical optimization, Estimation theory, and Machine learning

#### Choosing a cost function

While it is possible to define some arbitrary, ad hoc cost function, frequently a particular cost will be used, either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem (e.g., in a probabilistic formulation the posterior probability of the model can be used as an inverse cost). Ultimately, the cost function will depend on the desired task. An overview of the three main categories of learning tasks is provided below.

### Learning paradigms

There are three major learning paradigms, each corresponding to a particular abstract learning task. These are supervised learning, unsupervised learning and reinforcement learning.
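Before detailing the paradigms, here is a minimal numerical illustration (added here, not the article's) of the empirical cost $\hat{C}=\frac{1}{N}\sum_{i=1}^N (f(x_i)-y_i)^2$ defined above, for an arbitrary candidate model $f$; the data and model are purely illustrative.

```python
# Empirical cost: mean squared error over N observed (x, y) pairs.
import numpy as np

def empirical_cost(f, xs, ys):
    return np.mean((f(xs) - ys) ** 2)

xs = np.linspace(0.0, 1.0, 50)      # N = 50 sample inputs
ys = 2.0 * xs + 0.1                 # observations from some process
f = lambda x: 1.8 * x + 0.2         # a candidate model f
print(empirical_cost(f, xs, ys))    # the cost C-hat of this f on the sample
```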
#### Supervised learning

In supervised learning, we are given a set of example pairs $\scriptstyle (x, y), x \in X, y \in Y$ and the aim is to find a function $\scriptstyle f : X \rightarrow Y$ in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data and it implicitly contains prior knowledge about the problem domain. A commonly used cost is the mean-squared error: the average squared error between the network's output $f(x)$ and the target value $y$ over all the example pairs. When one tries to minimize this cost using gradient descent for the class of neural networks called multilayer perceptrons, one obtains the common and well-known backpropagation algorithm for training neural networks. Tasks that fall within the paradigm of supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). The supervised learning paradigm is also applicable to sequential data (e.g., for speech and gesture recognition). This can be thought of as learning with a "teacher," in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.

#### Unsupervised learning

In unsupervised learning, some data $\scriptstyle x$ is given, together with a cost function to be minimized, which can be any function of the data $\scriptstyle x$ and the network's output $\scriptstyle f$. The cost function is dependent on the task (what we are trying to model) and our a priori assumptions (the implicit properties of our model, its parameters and the observed variables). As a trivial example, consider the model $\scriptstyle f(x) = a$, where $\scriptstyle a$ is a constant and the cost $\scriptstyle C=E[(x - f(x))^2]$. Minimizing this cost will give us a value of $\scriptstyle a$ that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between $\scriptstyle x$ and $\scriptstyle f(x)$, whereas in statistical modeling, it could be related to the posterior probability of the model given the data. (Note that in both of those examples those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.

#### Reinforcement learning

In reinforcement learning, data $\scriptstyle x$ are usually not given, but generated by an agent's interactions with the environment. At each point in time $\scriptstyle t$, the agent performs an action $\scriptstyle y_t$ and the environment generates an observation $\scriptstyle x_t$ and an instantaneous cost $\scriptstyle c_t$, according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimizes some measure of a long-term cost; i.e., the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated.
More formally, the environment is modeled as a Markov decision process (MDP) with states $\scriptstyle {s_1,...,s_n}\in S$ and actions $\scriptstyle {a_1,...,a_m} \in A$ with the following probability distributions: the instantaneous cost distribution $\scriptstyle P(c_t|s_t)$, the observation distribution $\scriptstyle P(x_t|s_t)$ and the transition $\scriptstyle P(s_{t+1}|s_t, a_t)$, while a policy is defined as a conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the policy that minimizes the cost; i.e., the MC for which the cost is minimal. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Dynamic programming has been coupled with ANNs (neuro-dynamic programming) by Bertsekas and Tsitsiklis[2] and applied to multi-dimensional nonlinear problems such as those involved in vehicle routing or natural resources management, because of the ability of ANNs to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of the original control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks. See also: dynamic programming and stochastic control

### Learning algorithms

Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Most of the algorithms used in training artificial neural networks employ some form of gradient descent. This is done by simply taking the derivative of the cost function with respect to the network parameters and then changing those parameters in a gradient-related direction. Evolutionary methods,[3] gene expression programming,[4] simulated annealing,[5] expectation-maximization, non-parametric methods and particle swarm optimization[6] are some commonly used methods for training neural networks. See also: machine learning

## Employing artificial neural networks

Perhaps the greatest advantage of ANNs is their ability to be used as an arbitrary function approximation mechanism that 'learns' from observed data. However, using them is not so straightforward and a relatively good understanding of the underlying theory is essential. • Choice of model: This will depend on the data representation and the application. Overly complex models tend to lead to problems with learning. • Learning algorithm: There are numerous trade-offs between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular fixed data set. However, selecting and tuning an algorithm for training on unseen data requires a significant amount of experimentation. • Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can be extremely robust. With the correct implementation, ANNs can be used naturally in online learning and large data set applications. Their simple implementation and the existence of mostly local dependencies exhibited in the structure allows for fast, parallel implementations in hardware.
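As a concrete instance of the gradient-descent training described under Learning algorithms above (a sketch added here, not the article's; the one-parameter linear model and learning rate are illustrative), the parameter is repeatedly nudged along the negative gradient of the cost:

```python
# Gradient descent on the MSE cost of a one-parameter model f(x) = w*x.
import numpy as np

xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs                                # target behaviour: w = 2

w, lr = 0.0, 0.5                             # initial weight, learning rate
for _ in range(100):
    grad = np.mean(2 * (w * xs - ys) * xs)   # dC/dw for C = mean((w*x - y)^2)
    w -= lr * grad                           # step against the gradient
print(w)                                     # ~2.0
```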
## Applications

The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.

### Real-life applications

The tasks artificial neural networks are applied to tend to fall within the following broad categories: • Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling. • Classification, including pattern and sequence recognition, novelty detection and sequential decision making. • Data processing, including filtering, clustering, blind source separation and compression. • Robotics, including directing manipulators and computer numerical control. Application areas include system identification and control (vehicle control, process control, natural resources management), quantum chemistry,[7] game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications (automated trading systems), data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering. Artificial neural networks have also been used to diagnose several cancers. An ANN-based hybrid lung cancer detection system named HLND improves the accuracy of diagnosis and the speed of lung cancer radiology.[8] These networks have also been used to diagnose prostate cancer. The diagnoses can be used to make models specific to a given patient, drawing on data taken from a large group of patients. The models do not depend on assumptions about correlations of different variables. Colorectal cancer has also been predicted using neural networks. Neural networks could predict the outcome for a patient with colorectal cancer with considerably more accuracy than current clinical methods. After training, the networks could predict multiple patient outcomes from unrelated institutions.[9]

### Neural networks and neuroscience

Theoretical and computational neuroscience is the field concerned with the theoretical analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behavior, the field is closely related to cognitive and behavioral modeling. The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models) and theory (statistical learning theory and information theory).

#### Types of models

Many models are used in the field, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of the short-term behavior of individual neurons, through models of how the dynamics of neural circuitry arise from interactions between individual neurons, and finally to models of how behavior can arise from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relations to learning and memory, from the individual neuron to the system level.
## Neural network software

Main article: Neural network software

Neural network software is used to simulate, research, develop and apply artificial neural networks, biological neural networks and, in some cases, a wider array of adaptive systems.

## Types of artificial neural networks

Main article: Types of artificial neural networks

Artificial neural network types vary from those with only one or two layers of single-direction logic, to complicated multi-input, multi-directional feedback loops and layers. On the whole, these systems use algorithms in their programming to determine control and organization of their functions. Some may be as simple as a one-neuron layer with an input and an output, and others can mimic complex systems such as dANN, which models chromosomal DNA at the cellular level in artificial organisms and simulates reproduction, mutation and population sizes.[10] Most systems use "weights" to change the parameters of the throughput and the varying connections to the neurons. Artificial neural networks can be autonomous and learn by input from outside "teachers" or even self-teaching from written-in rules.

## Theoretical properties

### Computational power

The multi-layer perceptron (MLP) is a universal function approximator, as proven by the Cybenko theorem. However, the proof is not constructive regarding the number of neurons required or the settings of the weights. Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a specific recurrent architecture with rational-valued weights (as opposed to full-precision real number-valued weights) has the full power of a Universal Turing Machine[11] using a finite number of neurons and standard linear connections. They have further shown that the use of irrational values for weights results in a machine with super-Turing power.

### Capacity

Artificial neural network models have a property called 'capacity', which roughly corresponds to their ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.

### Convergence

Nothing can be said in general about convergence since it depends on a number of factors. Firstly, there may exist many local minima. This depends on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when far away from a local minimum. Thirdly, for a very large amount of data or parameters, some methods become impractical. In general, it has been found that theoretical guarantees regarding convergence are an unreliable guide to practical application.

### Generalization and statistics

In applications where the goal is to create a system that generalizes well to unseen examples, the problem of over-training has emerged. This arises in convoluted or over-specified systems when the capacity of the network significantly exceeds the needed free parameters. There are two schools of thought for avoiding this problem: The first is to use cross-validation and similar techniques to check for the presence of overtraining and optimally select hyperparameters such as to minimize the generalization error. The second is to use some form of regularization.
This is a concept that emerges naturally in a probabilistic (Bayesian) framework, where the regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting.

#### Confidence analysis of a neural network

Supervised neural networks that use an MSE cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the output of the network, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified. By assigning a softmax activation function on the output layer of the neural network (or a softmax component in a component-based neural network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is very useful in classification as it gives a certainty measure on classifications. The softmax activation function is: $y_i=\frac{e^{x_i}}{\sum_{j=1}^c e^{x_j}}$

### Dynamic properties

Various techniques originally developed for studying disordered magnetic systems (i.e., the spin glass) have been successfully applied to simple neural network architectures, such as the Hopfield network. Influential work by E. Gardner and B. Derrida has revealed many interesting properties about perceptrons with real-valued synaptic weights, while later work by W. Krauth and M. Mezard has extended these principles to binary-valued synapses.

## Disadvantages

One drawback to using artificial neural networks, particularly in robotics, is that they require a large diversity of training for real-world operation. A. K. Dewdney, a former Scientific American columnist, wrote in 1997, "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool." (Dewdney, p. 82) Arguments for Dewdney's position are that to implement large and effective software neural networks, much processing and storage resources need to be committed. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a highly simplified form on Von Neumann technology may compel a NN designer to fill many millions of database rows for its connections - which can lead to excessive RAM and hard-disk requirements. Furthermore, the designer of NN systems will often need to simulate the transmission of signals through many of these connections and their associated neurons - which must often be matched with incredible amounts of CPU processing power and time. While neural networks often yield effective programs, they too often do so at the cost of time and monetary efficiency.
Arguments against Dewdney's position are that neural nets have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft[12] to detecting credit card fraud.[13] Technology writer Roger Bridgman commented on Dewdney's statements about neural nets: Neural networks, for instance, are in the dock not only because they have been hyped to high heaven (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.[14] Some other criticisms came from advocates of hybrid models (combining neural networks and symbolic approaches). They advocate the intermix of these two approaches and believe that hybrid models can better capture the mechanisms of the human mind (Sun and Bookman 1994).

## Successes in Pattern Recognition Contests since 2009

Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions in pattern recognition and machine learning.[15] For example, the bi-directional and multi-dimensional Long Short-Term Memory (LSTM)[16][17] of Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages to be learned. Fast GPU-based implementations of this approach by Dan Ciresan and colleagues at IDSIA have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition,[18] the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge,[19] and others. Their neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[20] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the famous MNIST handwritten digits problem of Yann LeCun at NYU. Deep, highly nonlinear neural architectures similar to the 1980 Neocognitron by Kunihiko Fukushima[21] and the "standard architecture of vision"[22] can also be pre-trained by unsupervised methods[23][24] of Geoff Hinton's lab at Toronto University. A team from this lab won a 2012 contest sponsored by Merck to design software to help find molecules that might lead to new drugs.[25]

## Gallery

• A single-layer feedforward artificial neural network. Arrows originating from $\scriptstyle x_2$ are omitted for clarity. There are p inputs to this network and q outputs. There is no activation function (or equivalently, the activation function is $\scriptstyle g(x)=x$). In this system, the value of the qth output, $\scriptstyle y_q$, would be calculated as $\scriptstyle y_q = \sum_{i=1}^p x_i w_{iq}$ • A two-layer feedforward artificial neural network.

## References

1. 2. Bertsekas, D.P., Tsitsiklis, J.N. (1996). Neuro-dynamic programming. Athena Scientific. p. 512. ISBN 1-886529-10-8. 3. de Rigo, D., Castelletti, A., Rizzoli, A.E., Soncini-Sessa, R., Weber, E. (January 2005).
"A selective improvement technique for fastening Neuro-Dynamic Programming in Water Resources Network Management". In Pavel Zítek. Proceedings of the 16th IFAC World Congress - IFAC-PapersOnLine. 16. 16th IFAC World Congress. Prague, Czech Republic: IFAC. doi:10.3182/20050703-6-CZ-1902.02172. ISBN 978-3-902661-75-3. Retrieved 2011-12-30. 4. 5. Da, Y., Xiurun, G. (July 2005). "An improved PSO-based ANN with simulated annealing technique". In T. Villmann. New Aspects in Neurocomputing: 11th European Symposium on Artificial Neural Networks. Elsevier. doi:10.1016/j.neucom.2004.07.002. Retrieved 2011-12-30. 6. Wu, J., Chen, E. (May 2009). "A Novel Nonparametric Regression Ensemble for Rainfall Forecasting Using Particle Swarm Optimization Technique Coupled with Artificial Neural Network". In Wang, H., Shen, Y., Huang, T., Zeng, Z.. 6th International Symposium on Neural Networks, ISNN 2009. Springer. doi:10.1007/978-3-642-01513-7_6. ISBN 978-3-642-01215-0. Retrieved 2012-01-01. 7. Roman M. Balabin, Ekaterina I. Lomakina (2009). "Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies". 131 (7): 074104. doi:10.1063/1.3206326. PMID 19708729. 8. 9. 10. "DANN:Genetic Wavelets". dANN project. Archived from the original on 21 August 2010. Retrieved 12 July 2010. 11. Siegelmann, H.T.; Sontag, E.D. (1991). "Turing computability with neural nets". Appl. Math. Lett. 4 (6): 77–80. doi:10.1016/0893-9659(91)90080-F. 12. "NASA NEURAL NETWORK PROJECT PASSES MILESTONE". NASA. Retrieved 12 July 2010. 13. "Counterfeit Fraud" (PDF). VISA. p. 1. Retrieved 12 July 2010. "Neural Networks (24/7 Monitoring):" 14. 2012 Kurzweil AI Interview with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009-2012 15. Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552 16. D. Ciresan, A. Giusti, L. Gambardella, J. Schmidhuber. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. In Advances in Neural Information Processing Systems (NIPS 2012), Lake Tahoe, 2012. 17. K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4): 93-202, 1980. 18. [[Geoff Hinton|Hinton, G. E.]]; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets". 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID 16764513. 28. Boddhu, Sanjay K., John C. Gallagher, and Saranyan A. Vigraham. "A commercial off-the-shelf implementation of an analog neural computer." International Journal on Artificial Intelligence Tools 17.02 (2008): 241-258. ## Bibliography • Bhadeshia H. K. D. H. (1999). "Neural Networks in Materials Science". ISIJ International 39 (10): 966–979. doi:10.2355/isijinternational.39.966. • Bishop, C.M. (1995) Neural Networks for Pattern Recognition, Oxford: Oxford University Press. ISBN 0-19-853849-9 (hardback) or ISBN 0-19-853864-2 (paperback) • Cybenko, G.V. (1989). Approximation by Superpositions of a Sigmoidal function, Mathematics of Control, Signals, and Systems, Vol. 2 pp. 303–314. electronic version • Duda, R.O., Hart, P.E., Stork, D.G. 
(2001) Pattern Classification (2nd edition), Wiley, ISBN 0-471-05669-3
• Egmont-Petersen, M., de Ridder, D., Handels, H. (2002). "Image processing with neural networks - a review". Pattern Recognition 35 (10): 2279–2301. doi:10.1016/S0031-3203(01)00178-9.
• Gurney, K. (1997) An Introduction to Neural Networks, London: Routledge. ISBN 1-85728-673-1 (hardback) or ISBN 1-85728-503-4 (paperback)
• Haykin, S. (1999) Neural Networks: A Comprehensive Foundation, Prentice Hall, ISBN 0-13-273350-1
• Fahlman, S, Lebiere, C (1991). The Cascade-Correlation Learning Architecture, created for National Science Foundation, Contract Number EET-8716324, and Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976 under Contract F33615-87-C-1499. electronic version
• Hertz, J., Palmer, R.G., Krogh, A.S. (1990) Introduction to the Theory of Neural Computation, Perseus Books. ISBN 0-201-51560-1
• Lawrence, Jeanette (1994) Introduction to Neural Networks, California Scientific Software Press. ISBN 1-883157-00-5
• Masters, Timothy (1994) Signal and Image Processing with Neural Networks, John Wiley & Sons, Inc. ISBN 0-471-04963-8
• Ness, Erik. 2005. SPIDA-Web. Conservation in Practice 6(1):35-36. On the use of artificial neural networks in species taxonomy.
• Ripley, Brian D. (1996) Pattern Recognition and Neural Networks, Cambridge: Cambridge University Press
• Siegelmann, H.T. and Sontag, E.D. (1994). Analog computation via neural networks, Theoretical Computer Science, v. 131, no. 2, pp. 331–360. electronic version
• Sergios Theodoridis, Konstantinos Koutroumbas (2009) "Pattern Recognition", 4th Edition, Academic Press, ISBN 978-1-59749-272-0.
• Smith, Murray (1993) Neural Networks for Statistical Modeling, Van Nostrand Reinhold, ISBN 0-442-01310-8
• Wasserman, Philip (1993) Advanced Methods in Neural Computing, Van Nostrand Reinhold, ISBN 0-442-00461-3
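As a concrete companion to the softmax definition and the single-layer feedforward network described in the gallery above, here is a minimal NumPy sketch. The array shapes, the random seed, and the max-shift for numerical stability are illustrative choices of mine, not anything specified in the article.

```python
import numpy as np

def softmax(x):
    """Softmax y_i = exp(x_i) / sum_j exp(x_j); shifting by max(x) avoids overflow."""
    z = np.exp(x - np.max(x))
    return z / z.sum()

# Single-layer feedforward network with identity activation:
# y_q = sum_i x_i * w_iq, as in the gallery description.
rng = np.random.default_rng(0)
p, q = 3, 2                       # p inputs, q outputs
W = rng.standard_normal((p, q))   # w_iq: weight from input i to output q
x = rng.standard_normal(p)
y = x @ W                         # linear outputs
print(y)
print(softmax(y))                 # outputs interpreted as posterior probabilities
```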
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 64, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8920965790748596, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/96620/flatness-for-family-of-hypersurfaces/96837
## Flatness for family of hypersurfaces

Let $X \to Y$ be a family of hypersurfaces in a constant $\mathbb{P}^n$, i.e. $X \subset Y \times \mathbb{P}^n$ is locally on $Y$ given by one equation of degree $d$ in $\mathbb{P}^n$. Is $X \to Y$ automatically flat? I know that it is so if $Y$ is reduced, since in this case the fact that the Hilbert polynomial of $X_y$ is constant on $Y$ implies that the family is actually flat. So is $X \to Y$ still automatically flat when $Y$ is nonreduced? -

## 3 Answers

(Of course, you have the implicit assumption that the equation of degree $d$ is not $0$.) The answer is yes. In the case where $Y$ is locally Noetherian, it is true by the "slicing criterion for flatness on the source", as $\mathbb{P}^n_Y \rightarrow Y$ is flat. See Exercise 25.6.F in the May 12 2012 version of http://math216.wordpress.com/2011-12-course/ . Your special case is essentially Cor. 2 on p. 152 of Matsumura's "Commutative Algebra". To get to the general case, use the general technique that finitely presented morphisms (as yours is!) can (locally on the target) be pulled back from the Noetherian situation (see Exercise 10.3.G in the notes linked to above); but this may be more than you care to know. -

I think you also need to assume that the single equation (local on $Y$) remains a single equation of degree $d$ in each fiber; otherwise, $X$ could be the blowup of $Y = \mathbb{A}^2$ at a point. (I think this was also implicit in the question, since this example has $Y$ reduced.) – Charles Staats May 10 2012 at 23:35
Another reference is Lemma 9.3.4 in "FGA explained". – Olivier Benoist May 11 2012 at 6:36
Very helpful, thank you so much. – OldMacdonaldHadaForm May 11 2012 at 6:48
May I ask if a similar result holds for the Hilbert scheme of points? Namely, if a family of $0$-dimensional subschemes of $Y \times \mathbb{P}^n$ has constant Hilbert polynomial $k$, is it automatically flat over $Y$? – OldMacdonaldHadaForm May 13 2012 at 6:54
1 That's true over a reduced base. But not if the base is nonreduced: consider the case k=1 and n=0, over a base Spec K[t]/(t^2), where you take the family to just be over the reduced point. I can't think of how to state a hypothesis that gets around this that isn't essentially "flat" (perhaps in its guise of locally free). – Ravi Vakil May 14 2012 at 4:00

This is an example when proving "locally free" instead of merely "flat" is easier and more straightforward, and no Noetherian assumption on the base is needed. The point is that if some coefficient $a$ of a polynomial $f\in R[x_1,\dotsc,x_n]$ is nonzero at $p\in Spec R$ (i.e. nonzero in $R/p$) then it is invertible in an open neighborhood $D(a)\ni p$. So let $f\in R[x_1,\dotsc,x_n]$ be a polynomial of degree $d$, $p$ be a point of $Spec R$ (i.e. a prime ideal in $R$) and $k$ be the quotient field of $R/p$. Let $\bar f \in k[x_1,\dotsc,x_n]$ be the reduction of $f$ modulo $p$. Using a change of coordinates in $k[x_1,\dotsc,x_n]$, put $\bar f$ in a Weierstrass form w.r.t. the variable $x_n$. This means that $$\bar f= \bar a x_n^d + p_{d-1}x_n^{d-1} + \dots + p_0$$ for some polynomials $p_j$ in the remaining variables $x_1,\dotsc,x_{n-1}$, and $\bar a\in k$, $\bar a\ne 0$.
If $k$ is infinite, this can be done by a linear change of coordinates. If $k$ is finite, there is a little trick. If $r_i/s_i\in k$ are the coefficients involved in the change of coordinates ($r_i,s_i\in R$) then this change of coordinates can be done already in the ring $R' = R[1/(a \prod s_i)]$, i.e. over the open set $Spec R'= D(a\prod s_i)$ in $Spec R$ containing $[p]$. Further, $a$ is invertible over this set. Now, over $R'$ the quotient $R'[x_1,\dotsc,x_n]/(f)$ is a free $R'[x_1,\dotsc,x_{n-1}]$-module with basis $1,x_n,\dotsc, x_n^{d-1}$. Hence, it is a free $R'$-module. QED

This proves the statement for a family of nonzero hypersurfaces in $\mathbb A^n$. For a family of nonzero hypersurfaces in $\mathbb P^n$, cover $\mathbb P^n$ by $\mathbb A^n$ appropriately. -

This is very nice, and certainly the way to go (rather than invoking a harder theorem with more restrictive hypotheses, as I did). +1! – Ravi Vakil May 14 2012 at 4:00
This is so very beautifully concrete! May I ask for a reference for the "general" Weierstrass form? I know it only over $\mathbb{C}$ (or in characteristic zero). – OldMacdonaldHadaForm May 16 2012 at 8:59
Thank you very much for your answer. – OldMacdonaldHadaForm May 16 2012 at 9:00
Sorry for the previous question. I think I understand now. – OldMacdonaldHadaForm May 17 2012 at 10:33

Consider the projective space $\mathbb{P}^{\binom{n+d}{d}-1}$ of degree $d$ forms in $n+1$ variables. On this projective space there is a universal family of hypersurfaces in $\mathbb{P}^n$. This family is flat since the Hilbert polynomial is constant, and the base is reduced. Now given any family $X \to Y$ of not necessarily flat hypersurfaces of degree $d$ in $\mathbb{P}^n$, there exists a unique moduli map $Y \to \mathbb{P}^{\binom{n+d}{d}-1}$ such that $X\to Y$ is the pullback of the universal family via the moduli map. Since flatness is stable under base change, the map $X \to Y$ must also be flat even when $Y$ is not reduced. NOTE: I suspect that this answer may contain a mistake due to some sort of "circular" reasoning. At the moment I can not see this mistake though. -
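A small computational illustration of the final step of the "locally free" answer (a sketch of my own; the specific $f$ below is a hypothetical example, not taken from the thread): once $f$ is monic in $x_n$ of degree $d$, dividing any element of $R'[x_1,\dotsc,x_n]$ by $f$ leaves a remainder of degree $< d$ in $x_n$ with coefficients in $R'[x_1,\dotsc,x_{n-1}]$, which is exactly the freeness of the quotient on $1, x_n, \dots, x_n^{d-1}$. Using SymPy:

```python
from sympy import symbols, div, expand

t, x, y = symbols('t x y')
# A hypothetical f that is monic of degree 2 in y (the Weierstrass form after
# inverting the leading coefficient), over the base ring Q[t][x].
f = y**2 + (t*x + 1)*y + x**3
p = y**5 + t*y**3 + x*y**2 + 7       # an arbitrary element of the polynomial ring
q, r = div(p, f, y)                  # divide as polynomials in the variable y
assert expand(q*f + r) == expand(p)  # division identity p = q*f + r
print(r)                             # degree < 2 in y: c1(t, x)*y + c0(t, x)
```

Every residue class mod $(f)$ thus has a unique representative $c_0 + c_1 x_n + \dots + c_{d-1} x_n^{d-1}$, exhibiting the basis claimed in the proof.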
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351475834846497, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/99789/list
## Return to Answer

2 added 6 characters in body

Every even perfect number is of the form $2^{p-1}(2^p - 1)$ where $2^p - 1$ is a (Mersenne) prime. Note that $p$ must be prime – if $p = ab$ with $a, b > 1$ then $$2^p - 1 = 2^{ab} - 1 = (2^a)^b - 1 = (2^a - 1)(1 + 2^a + 2^{2a} + \dots + 2^{a(b-1)}).$$ If $p = 2$, we obtain the first perfect number $6$ which satisfies $6 \equiv 1\ (\text{mod } 5)$. Every other prime is odd, so let $p = 2k + 1$. Then $$2^{p-1}(2^p - 1) = 2^{2k}(2^{2k+1} - 1) = 2\cdot 2^{4k} - 2^{2k} = 2\cdot 16^k - 4^k \equiv 2 - (-1)^k\ (\text{mod } 5).$$ So, for $p = 2k + 1$, $$2^{p-1}(2^p - 1) \equiv \begin{cases} 1 \ (\text{mod } 5) & \text{if }k\text{ is even}\\ 3 \ (\text{mod } 5) & \text{if }k\text{ is odd}. \end{cases}$$

1

Every even perfect number is of the form $2^{p-1}(2^p - 1)$ where $2^p - 1$ is a (Mersenne) prime. Note that $p$ must be prime – if $p = ab$ with $a, b > 1$ then $2^p - 1 = 2^{ab} - 1 = (2^a)^b - 1 = (2^a - 1)(1 + 2^a + 2^{2a} + \dots + 2^{a(b-1)})$. If $p = 2$, we obtain the first perfect number $6$ which satisfies $6 \equiv 1\ (\text{mod } 5)$. Every other prime is odd, so let $p = 2k + 1$. Then $2^{p-1}(2^p - 1) = 2^{2k}(2^{2k+1} - 1) = 2\cdot 2^{4k} - 2^{2k} = 2\cdot 16^k - 4^k \equiv 2 - (-1)^k\ (\text{mod } 5)$. So, for $p = 2k + 1$, $2^{p-1}(2^p - 1) \equiv \begin{cases} 1 \ (\text{mod } 5) & \text{if }k\text{ is even}\\ 3 \ (\text{mod } 5) & \text{if }k\text{ is odd}. \end{cases}$
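A quick numerical check of the two cases (a sketch of my own; the exponent cutoff of 62 is an arbitrary choice to keep the run short):

```python
from sympy import isprime

# For each Mersenne prime 2^p - 1, verify that the even perfect number
# 2^(p-1) * (2^p - 1) is 1 mod 5 when k = (p-1)//2 is even and 3 mod 5 when k is odd.
for p in range(2, 62):
    if isprime(p) and isprime(2**p - 1):
        n = 2**(p - 1) * (2**p - 1)
        k = (p - 1) // 2
        assert n % 5 == (1 if k % 2 == 0 else 3)
        print(p, n % 5)   # (2, 1), (3, 3), (5, 1), (7, 3), (13, 1), ...
```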
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7293450236320496, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/101952/determination-of-rationality-and-computing-a-rational-parametrization/101977
## Determination of rationality and computing a rational parametrization

Suppose I have a hypersurface in $\mathbb{C}P^n$ given by some $f(z_1, \dots, z_{n+1}) = 0.$ Is there an algorithm which returns a rational parametrization if there is one, and "not rational" otherwise? -

1 I do not think so – Francesco Polizzi Jul 11 at 13:31
@Francesco: the lack of such an algorithm would not shock me, but even heuristics would be nice... – Igor Rivin Jul 11 at 14:10
Do you want an algorithm, or just want to know whether one exists? i.e. whether or not the problem is decidable? – Daniel Loughran Jul 11 at 16:16
@Daniel: to be honest, it did not occur to me that the problem was not decidable. Do you think it might be? – Igor Rivin Jul 11 at 17:06

## 2 Answers

For smooth cubics in $\mathbb P^5$ this is unknown. That is, there are certain explicit families of such cubics that are known to be rational (those that admit a Pfaffian description, for example) but beyond these the problem of rationality for cubic $4$-folds is a famous unsolved problem. -

Here is my attempt at a heuristic as to why the problem should be undecidable. Suppose we have a hypersurface $X$ of dimension $n$ and we wish to decide whether or not it is rational. I will assume that $n\geq2$. Then giving a rational map $\mathbb{P}^n \dashrightarrow X$ is the same as giving a $\mathbb{C}(t_1,\ldots,t_n)$-valued point on $X$. However, "Hilbert's 10th problem" for such function fields is undecidable (see http://www.math.psu.edu/eisentra/varieties.pdf). Hence the problem you have asked for is undecidable.

Edit: As noted in the comments, this reasoning is not quite correct, as for Hilbert's 10th problem we fix $m$ and a field $\mathbb{C}(t_1,\ldots,t_m)$, then allow the dimension $n$ to vary. That is why it is only a heuristic! Note that for Hilbert's 10th problem, the case $\mathbb{C}(t)$ is still open.

Edit: As remarked below, rationality for curves is decidable. One just needs to compute the genus of the normalisation of the projective closure of the curve. -

1 You can find an embedding of the normalization in projective space using blow-ups, say. This will give you a differential, say $dx/x - dy/y$. Counting the zeroes and poles of the differential computes the genus. I don't see any problems. – Will Sawin Jul 11 at 18:24
1 Yes, sorry, I should have said that rationality is decidable for curves. I guess the thing which is not known to be decidable is whether or not a variety contains a rational curve. I will edit my answer. – Daniel Loughran Jul 11 at 18:47
Dear Daniel, maybe I am nitpicking a little bit, but to me undecidability of Hilbert's tenth problem for function fields does not seem to imply undecidability of the OP's question; HTP asks for an algorithm that works for all hypersurfaces, whereas in the OP's question the dimension of the hypersurface is fixed at the outset. (So the distinction is the same as that between decidability of rationality for curves, and decidability of existence of a rational curve in a variety.) – Artie Prendergast-Smith Jul 11 at 19:21
@Artie: Hmm, I see your point. I will leave my answer up anyway as hopefully it at least gives a heuristic as to why the problem might be undecidable. – Daniel Loughran Jul 11 at 19:43
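For what it's worth, in the one classical case where an algorithm really is standard — conics with a known rational point — the parametrization comes from sweeping lines through that point. A minimal sketch for the unit circle (my own illustration, with the point $(-1,0)$ chosen by hand):

```python
from sympy import symbols, simplify

t = symbols('t')
# The pencil of lines y = t*(x + 1) through (-1, 0) meets x^2 + y^2 = 1
# in exactly one further point, giving a rational parametrization.
x = (1 - t**2) / (1 + t**2)
y = 2*t / (1 + t**2)
assert simplify(x**2 + y**2 - 1) == 0
print(x, y)
```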
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9449194669723511, "perplexity_flag": "head"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Gromov's_theorem_on_groups_of_polynomial_growth
Gromov's theorem on groups of polynomial growth

In mathematics, Gromov's theorem on groups of polynomial growth, named for Mikhail Gromov, characterizes finitely generated groups of polynomial growth as those groups which have a nilpotent subgroup of finite index.

The growth rate of a group is a well-defined notion from asymptotic analysis. To say that a finitely generated group has polynomial growth means that the number of elements of length (relative to a symmetric generating set) at most n is bounded above by a polynomial function p(n). The order of growth is then the degree of the polynomial function p. A nilpotent group G is a group with a lower central series terminating in the identity subgroup.

Gromov's theorem states that a finitely generated group has polynomial growth if and only if it has a nilpotent subgroup that is of finite index.

There is a vast literature on growth rates, leading up to Gromov's theorem. An earlier result of Hyman Bass showed that if G is a finitely generated nilpotent group, then the group has polynomial growth. Let G be a finitely generated nilpotent group with lower central series $G = G_1 \supseteq G_2 \supseteq \ldots$ In particular, each quotient group $G_k/G_{k+1}$ is a finitely generated abelian group. Bass's theorem states that the order of polynomial growth of G is $d(G) = \sum_{k \geq 1} k \ \operatorname{rank}(G_k/G_{k+1})$ where rank denotes the rank of an abelian group, i.e. the largest number of independent and torsion-free elements of the abelian group.

In particular, Gromov's and Bass's theorems imply that the order of polynomial growth of a finitely generated group is always either an integer or infinity (excluding, for example, fractional powers).

In order to prove this theorem, Gromov introduced a notion of convergence for metric spaces. This convergence, now called Gromov–Hausdorff convergence, is widely used in geometry.

References
• H. Bass, The degree of polynomial growth of finitely generated nilpotent groups, Proceedings of the London Mathematical Society, vol. 25(3), 1972
• M. Gromov, Groups of polynomial growth and expanding maps, Publications Mathématiques de l'I.H.É.S., 53, 1981
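Bass's formula is trivial to evaluate once the ranks of the lower-central-series quotients are known; a small sketch (the rank data in the comments are standard facts about these groups, not taken from this article):

```python
def bass_degree(ranks):
    """Bass's formula d(G) = sum_k k * rank(G_k / G_{k+1}),
    given the ranks of the successive quotients."""
    return sum(k * r for k, r in enumerate(ranks, start=1))

print(bass_degree([2]))     # Z^2: lower central series stops at once, degree 2
print(bass_degree([2, 1]))  # discrete Heisenberg group: degree 1*2 + 2*1 = 4
```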
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8540037274360657, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/191772-question-about-convex-set.html
# Thread:

1. ## Question about convex set

So I know the definition of a convex set: a set S is convex if, for all x, y in S, tx+(1-t)y belongs to S for all 0<t<1.

I have to decide whether the following set is convex: {(x,y); x-y<=1}

I started with tx+(1-t)y = t(x-y)+y <= t+y and then I am stuck.

edit: hm, I think I might have been mixing up the points that I choose (x,y) with the variables in x-y<=1... I redid it with matrices and this is what I came up with: (1 -1)(x1,x2) <= 1, c = (1 -1). Then cx <= 1 and cy <= 1 => c(tx + (1-t)y) = tcx+(1-t)cy <= t + 1 - t = 1. Is this the correct way to solve the problem?

2. ## Re: Question about convex set

Originally Posted by MagisterMan: So I know the definition of a convex set: a set S is convex if, for all x, y in S, tx+(1-t)y belongs to S for all 0<t<1. I have to decide whether the following set is convex: {(x,y); x-y<=1} I started with tx+(1-t)y = t(x-y)+y <= t+y and then I am stuck. edit: hm, I think I might have been mixing up the points that I choose (x,y) with the variables in x-y<=1...

Yes, that is exactly what you were doing! In this problem, the set $S = \{(x,y): x-y\leqslant1\}$ is a set of points in two-dimensional space. The definition of convexity says that you have to take two points in S. Call them $(x_1,y_1)$ and $(x_2,y_2)$. The fact that these points are in S tells you that $x_1-y_1\leqslant1$ and $x_2-y_2\leqslant1.$ To see whether S is convex, you have to check whether the point $t(x_1,y_1) + (1-t)(x_2,y_2) = \bigl(tx_1+(1-t)x_2,ty_1+(1-t)y_2\bigr)$ is in S. The condition for that is $\bigl(tx_1+(1-t)x_2\bigr) - \bigl(ty_1+(1-t)y_2\bigr) \leqslant1.$ So you need to check whether that condition follows from what you are given.
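Not a substitute for the algebraic argument above, but if you want a quick numerical sanity check of the condition before writing the proof, something like this works (an illustration of my own):

```python
import random

def in_S(point):                  # S = {(x, y) : x - y <= 1}
    x, y = point
    return x - y <= 1

random.seed(0)
for _ in range(100_000):
    p = (random.uniform(-10, 10), random.uniform(-10, 10))
    q = (random.uniform(-10, 10), random.uniform(-10, 10))
    t = random.random()
    if in_S(p) and in_S(q):
        c = (t*p[0] + (1 - t)*q[0], t*p[1] + (1 - t)*q[1])
        assert in_S(c)            # never fails: S is convex
print("no counterexample found")
```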
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480598568916321, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/potential-energy?page=2&sort=newest&pagesize=15
# Tagged Questions

Potential energy is the energy of a body or a system due to the position of the body or the arrangement of the particles of the system.

### speed of sound and the potential energy of an ideal gas; Goldstein derivation (1 answer, 165 views)

I am looking at the derivation of the speed of sound in Goldstein's Classical Mechanics (sec. 11-3, pp. 356-358, 1st ed). In order to write down the Lagrangian, he needs the kinetic and potential ...

### What is the most efficient machine for translating gravitational potential energy of one mass into kinetic energy of a different mass? (3 answers, 251 views)

As the question states, what is our current best machine for translating falling gravitational potential energy, such as a large weight, into launching a smaller projectile vertically? A lever? A ...

### Why are L4 and L5 lagrangian points stable? (8 answers, 980 views)

This diagram from wikipedia shows the gravitational potential energy of the sun-earth two body system, and demonstrates clearly the semi-stability of the L1, L2, and L3 lagrangian points. The blue ...

### Why do hydrogen atoms attract? (1 answer, 147 views)

That is, why is the potential energy with the orbitals overlapping less than with the Hydrogen atoms 'independent'. Similarly, why is a noble gas configuration stabler than if an electron were to be ...

### In $\textbf{f} = -\boldsymbol{\nabla} u$, what is $u$? (1 answer, 137 views)

I know that force is the negative gradient of the potential: $$\textbf{f} = -\boldsymbol{\nabla} u$$ where force $\textbf{f}$ is a vector and $u$ is a scalar. This is a relatively soft question, ...

### Can a mechanical system on hold be switched off, in another way than just letting it do its thing? (2 answers, 85 views)

Can the value of the potential energy, which is responsible for driving the system, diminish in time, while the system itself is stationary during that time? Can there be dissipation in a system, ...

### why is total electron energy of an electron in metal negative? (1 answer, 194 views)

In my textbook, it says that any electron bound in metals, modelled as some potential well $U$, has negative total electron energy, as shown below in the figure. Why is the total electron energy ...

### what is the 2D gravity potential? (4 answers, 417 views)

In 3D, I can calculate the total force due to gravity acting on a point on the surface of the unit sphere of constant density, where I choose units so that all physical constants (as well as the ...

### Does the mass of an object change as it moves away from the earth? (4 answers, 664 views)

The mass of a helium nucleus is less than the mass of two isolated protons and two isolated neutrons. When the component hadrons are assembled, this mass is lost as energy ($E=mc^2$). This makes it ...

### Rubber Band Forces (1 answer, 670 views)

I have a question regarding the force a band places on an object. Say I have a rubber band wrapped around 2 pegs at a certain distance, and at that distance I know the pounds of force per inch it is ...

### Atomic weight in respect to the binding energy? (1 answer, 135 views)

My book says that the weight of helium (with the nucleon number of 4 and proton of 2) is that of $6.6447\times10^{-27}$ kg. Earlier the book stated that if the proton number is left out it means that the ...

### Electricity & Magnetism - Is an electric field infinite? (2 answers, 391 views)

The inverse square law for an electric field is: $$E = \frac{Q}{4\pi\varepsilon_{0}r^2}$$ Here: $$\frac{Q}{\varepsilon_{0}}$$ is the source strength of the charge. It is the point charge divided ...

### Electrostatic Potential Energy (3 answers, 397 views)

I have read many books on Mechanics and Electrodynamics, and the one thing that has confused me about electrostatic potential energy is its derivation. One of the classical derivations is: ...

### Direction of rotation of proton in magnetic field--opposite to a dipole (5 answers, 996 views)

Chatroom created by @pcr for discussing this: http://chat.stackexchange.com/rooms/2824/direction-of-rotation-of-proton-in-magnetic-field Here's a small paradoxical question I was asked a long ...

### Still trying to understand gravitational potential and Poisson's equation? (3 answers, 540 views)

A week or so back I asked a question about the gravitational potential field $$\phi=\frac{-Gm}{r}, \qquad r\neq 0,$$ and how to show the Laplacian of $\phi$ equals zero for $r\neq 0$? Eventually, ...

### Why no basis vector in Newtonian gravitational vector field? (2 answers, 160 views)

In my textbook, the gravitational field is given by $$\mathbf{g}\left(\mathbf{r}\right)=-G\frac{M}{\left|\mathbf{r}\right|^{2}}e_{r}$$ which is a vector field. On the same page, it is also given as a ...

### Trying to understand Laplace's equation (1 answer, 406 views)

I'm struggling here so please excuse if I'm writing nonsense. I understand that the gravitational potential field, a scalar field, is given by $$\phi=\frac{-Gm}{r}$$ where $\phi$ is the ...

### Asynchronous generator run in vacuum chamber (1 answer, 162 views)

What will happen if we put an asynchronous generator in a vacuum chamber and run it above its synchronous speed? After reaching its over-synchronous speed we will cut off the electrical supply. Can it run ...

### storing energy (as mass) (3 answers, 280 views)

When chemical energy is released mass is reduced, if only by a negligible amount. Presumably that's true for all energy. And presumably that works in reverse as well: storing energy involves an ...

### Differences In Potential Equations (2 answers, 138 views)

Could someone please describe the differences between the uses of each of these potential equations: Potential due to a point charge: $V = \frac{k \cdot q}{r} - \frac{k \cdot ...$

### Terminology question about energy (1 answer, 81 views)

I'm looking for the appropriate term to use for what gets "used up" as potential energy is converted to heat and work. For example, some of the energy in solar radiation is converted by ...

### Potential energy in a gravitational field (1 answer, 277 views)

I've seen the following formula for the potential energy of a body in a gravitational field ($\rho$ is the density, $g$ is the gravitational acceleration): $$\rho g \int_E z dV$$ Can you please ...

### About constructing potential energy functions (3 answers, 315 views)

There are many classical systems with different potential functions. My problem is that I do not understand how one can construct a certain potential function for a certain system. Are there any ...

### Violation of conservation of energy and potential energy between objects (1 answer, 160 views)

I would like to clarify my question. I have numbered them to be independent questions. For any conservative fields, $\vec{F} = -\nabla U$. Which means the restoring force is opposite to the ...

### What is the physical reason a $+5V$ equipotential contour cannot intersect a $-5V$ equipotential contour? (3 answers, 207 views)

Now I've been told that equipotential contours with different values can never intersect. That is, if one level is 5V and one is -5V, they can't intersect. This makes sense to me mathematically (one ...

### Does the potential energy of fluid rising on a string change? (2 answers, 293 views)

Let's say I have a glass of water at rest. Then I go and hang a string above the water (vertically), such that the end of the string is immersed in the water. Over time some of the water is going to ...

### Is there really no meaning in potential energy and potential? (2 answers, 462 views)

I have been told all my physics life that potential energy between two mass/charge has no meaning and only their difference has meaning. The same goes for electric potential, only the difference ...

### Could someone remind me what this means again? $\nabla U = \pm F$ (2 answers, 220 views)

You know that for a potential function (conservative force/fields) that $\nabla U = \pm \vec{F}$. In math, we don't have that minus sign, we have only the plus one. What does it mean if you get rid ...

### Word problem - Potential/Elastic Energy [closed] (0 answers, 412 views)

A man (weighing $86\ kg$) is climbing a mountain when he suddenly falls. His security rope works like a spring with spring constant $1.2 \times 10^3\ N/m$ and after a fall of ...

### Why is gravitational potential energy negative, and what does that mean? (1 answer, 4k views)

I usually think of gravitational potential energy as representing just what it sounds like: the energy that we could potentially gain, using gravity. However, the equation for it (derived by ...

### Potential energy of a charged ring (2 answers, 610 views)

Consider a ring of radius $R$, and charge density $\rho$. What will be the potential energy of the ring in its self field? The best I can do: $$dq = \rho R \cdot \, d \alpha$$ $$E_p = 2 \pi R \cdot ...$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333859086036682, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/14459/are-there-pairs-of-consecutive-integers-with-the-same-sum-of-factors
## Background/Motivation

I was planning to explain Ruth-Aaron pairs to my son, but it took me a few moments to remember the definition. Along the way, I thought of the mis-definition, a pair of consecutive numbers with the same sum of divisors. Well, that's actually two definitions, depending on whether you are looking only at proper divisors.

Suppose all divisors. I quickly found (14,15), which both have a divisor sum (sigma function) of 24. Some more work provided (206,207), and then a search on OEIS gave sequence A002961.

What about proper divisors only? (2,3) comes quickly, but then nothing for a while. Noting that the parity of this value ($\sigma(n) - n$) is the same as that of $n$ unless $n$ is a square or twice a square, any solution pair must include one number of that form. With that much information in hand, I posted this problem at the reference desk on Wikipedia. User PrimeHunter determined that there were no solutions up to $10^{12}$, but there were no general responses.

Aside from the parity issue, I haven't found other individual constraints that would filter the candidates: the number of adjacent values identical modulo p for other small primes is at least as great as would be expected by chance, and there are a fair number of pairs that are arithmetically close.

Other than (2,3), are there pairs of consecutive integers such that $\sigma(n) - n = \sigma(n+1) - (n+1)$? -

"Noting that 2|(sigma(n)-n) unless n is a square or twice a square". This is nonsense, right? (assuming sigma(n) is the sum of the divisors of n...) – Kevin Buzzard Feb 7 2010 at 16:22
aah you mean 2|sigma(n) unless... – Kevin Buzzard Feb 7 2010 at 16:24
Yes, I was getting ahead of myself a bit. Rephrased to emphasize the point I was trying to make. Thanks, Kevin. – Alan Frank Feb 7 2010 at 17:01
You've now edited the post to say "the parity of sigma(n) is the same as that of n unless n is a square or twice a square", and this is still wrong (try n=3). Either that or I've misunderstood what you mean by sigma(n) (which you also didn't define, but which usually means the sum of (all) divisors of n). – Kevin Buzzard Feb 7 2010 at 18:28
@Alan: Instead of saying "the parity of this value (sigma(n)-n) is the same as that of n", why not simply say "2|sigma(n)" as Kevin suggested? – Bjorn Poonen Feb 9 2010 at 6:28
show 1 more comment

## 2 Answers

You should look at Carl Pomerance's followup paper: Ruth-Aaron pairs revisited, http://www.math.dartmouth.edu/~carlp/PDF/paper130.pdf . In his first paper with Erdos they proved a result which showed that the number of RA pairs had asymptotic density 0, but just barely. In the followup Pomerance shows that the sum of the reciprocals converges (which is much stronger). -

Don't both of these follow easily for the problem at hand, given the observation that at least one of the pairs must be a square or twice a square? Then the density of such integers up to $N$ must be $\ll \sqrt{N}$, so the sum of reciprocals converges by partial summation. – Thomas Bloom Feb 6 2012 at 16:07

The question can be rephrased as asking for $\sigma(n + 1) = \sigma(n) + \sigma(1)$, in line with the "Freshman's Dream." -

I was hoping to see something about this in Guy's Unsolved Problems In Number Theory, but no such luck.
In B13, he discusses sigma(n + 2) = sigma(n) + 2 (other than twin primes, there are only three solutions under 200,000,000). Perhaps the reference P Haukkanen in Math Student 62 (1993) 166-168 will say something, although the review 94j:11006 is not encouraging. B15 in Guy is about sigma(q) + sigma(r) = sigma(q + r) but the discussion does not head in the direction of the question at hand. By the way, I'm the same Gerry as above, just using a different computer. – Gerry Myerson Feb 7 2010 at 22:42
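For anyone who wants to replicate the small-range search, a brute-force sketch (my own; the $10^5$ bound is just to keep the run short — PrimeHunter's computation went to $10^{12}$):

```python
from sympy import divisor_sigma

# Search for consecutive n with equal sums of proper divisors:
# sigma(n) - n == sigma(n+1) - (n+1).
hits = [n for n in range(1, 10**5)
        if divisor_sigma(n) - n == divisor_sigma(n + 1) - (n + 1)]
print(hits)  # [2] -- only the pair (2, 3) in this range
```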
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513877630233765, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/15946-simple-question.html
# Thread:

1. ## simple question

What is the sum of (2 to the n-1)/(3 to the n) where n goes from 1 to infinity? In other words, what is the sum of 1/3 + 2/9 + 4/27 + 8/81 ...

2. Originally Posted by shadow1: What is the sum of (2 to the n-1)/(3 to the n) where n goes from 1 to infinity? In other words, what is the sum of 1/3 + 2/9 + 4/27 + 8/81 ...

Hello, use the sum formula for infinite geometric series:

$\sum_{n=1}^{\infty} \frac{2^{n-1}}{3^n} = \sum_{n=1}^{\infty} \frac{1}{2} \cdot \left(\frac{2}{3} \right)^n = \frac{\frac{1}{3}}{1-\frac{2}{3}} = 1$

3. Thanks earboth. So what you are saying is, if I have a pie and cut it into three pieces and eat one of them, I would have two one-third pieces left. Then if I cut each of those one-third pieces into three pieces and eat one, I would have four one-ninth pieces. And if I continue cutting each piece into threes and eating one of them, I will eventually eat the entire pie. Right?????

4. Originally Posted by shadow1: Thanks earboth. So what you are saying is, if I have a pie and cut it into three pieces and eat one of them, I would have two one-third pieces left. Then if I cut each of those one-third pieces into three pieces and eat one, I would have four one-ninth pieces. And if I continue cutting each piece into threes and eating one of them, I will eventually eat the entire pie. Right?????

Hello, that's it. I would recommend this method as a diet, because you eat infinitely many pieces over an endless time, and in the end you have actually eaten only one pie.
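A quick numerical check of the sum (just an illustration):

```python
# Partial sums of sum_{n>=1} 2^(n-1) / 3^n converge to 1.
s = 0.0
for n in range(1, 80):
    s += 2**(n - 1) / 3**n
print(s)  # 0.9999999999999998
```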
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9751765727996826, "perplexity_flag": "middle"}
http://nrich.maths.org/7244&part=
# Winning the Lottery

##### Stage: 2 Challenge Level:

In a far-away land, the lottery consists of four balls numbered $1$ to $4$, which are placed in a bag. To enter, you choose one number. To win, your number must match the number that is drawn from the bag.

What is the chance of winning this lottery?

The people running the lottery in this far-away land decide that it is too easy to win. So, they change their lottery game. In the new lottery, there are still four balls numbered $1$ to $4$, which are placed in a bag. Now, to enter, you choose two numbers. To win, your numbers must match (in any order) the two numbers that are drawn from the bag.

What is the chance of winning this new lottery? Have the organisers made it harder to win compared with their original version?

Can you create your own version of the lottery which would also be harder to win than the first game? How do you know that your game is harder?

You may like to use our lottery simulator to try out your ideas.
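If you want to check your reasoning (or test your own harder variants) by counting rather than simulating, each unordered draw is equally likely, so the chance of winning is one over the number of possible draws. A small sketch, using the game parameters from the problem:

```python
from fractions import Fraction
from math import comb

# Chance of matching an unordered draw of k balls out of n is 1 / C(n, k).
print(Fraction(1, comb(4, 1)))  # original game: 1/4
print(Fraction(1, comb(4, 2)))  # new game: 1/6, so it is indeed harder to win
```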
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948676586151123, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/94967-pythagorean-triplets.html
# Thread:

1. ## Pythagorean Triplets

Given x^2 + y^2 = t^2 and t^2 + z^2 = w^2 where (x,y,t)=(t,z,w)=1 and t,x,y,z,w>0. Show that given one solution to the equation, a person can always select r, s, u, and v from this solution that will result in a new solution which has (x,y,t)=(t,z,w)=1.

2. Anyone out there able to help at all with Pythagorean Triples? I really need to understand these and my textbook doesn't help at all. Make up your own example to explain if it helps.

3. Where are r, s, u, v coming from? Do you mean: given a solution to $x^2+y^2=t^2$ and $t^2+z^2=w^2$, find another solution r, s, u, v, p from this solution such that $r^2+s^2=u^2$ and $u^2+v^2=p^2$ with $(r,s,u)=(u,v,p)=1$?

4. I think you have it right, lancekam. But that is all my book tells me. There is nothing else. The only info in the chapter is the proof of Pythagorean triples. I am not sure how they expect you to read it and then apply it to other stuff that isn't immediately similar. Do you have any ideas?

5. Originally Posted by lancekam: Where are r, s, u, v coming from? ...

An interesting identity: $(x^2+y^2)(t^2+z^2) \;=\;\begin{Bmatrix}(xt-yz)^2 + (xz + yt)^2 \\ (xt+yz)^2 + (xz-yt)^2 \end{Bmatrix}$ Does this help?

6. Not really, because I don't know where to go or what to do with it.

7. This made no sense to me initially. What are r, s, u, v? Are they supposed to be the numbers that you plug into the usual formula for getting Pythagorean triples, and are we supposed to pick r, s, u, v from four of x, y, z, t, w? If so you should say so. It is easy to see that if the triples (x,y,t) and (t,z,w) are both proper Pythagorean triples then we have t=a^2+b^2 for some (a,b)=1 and also t=c^2-d^2 for some (c,d)=1. Surely t=1 mod 4, which means that c is odd and d is even. Let's see. 3,4,5 comes from (2,1) with 3=2^2-1^2, 4=2*2*1, 5=2^2+1^2. 5,12,13 comes from (3,2) with 5=3^2-2^2, 12=2*3*2, 13=3^2+2^2. (4,3) gives us the triple (7,24,25). (13,12) gives us the triple (25,312,313). (24,7) gives us the triple (527,336,625). (313,312) gives us some triple with 625. OK, this starts to seem plausible. Now we had t = (a^2+b^2) = (c^2-d^2). We are going to pick 2*a*b and a^2-b^2 as the generators of the first of the new triples, and 2*c*d and c^2+d^2 as the generators of the second triple. First we need to show that (2*a*b)^2 + (a^2-b^2)^2 = (c^2+d^2)^2 - (2*c*d)^2. The LHS is 4*a^2*b^2 + a^4 - 2*a^2*b^2 + b^4 = (a^2+b^2)^2. The RHS is c^4 + 2*c^2*d^2 + d^4 - 4*c^2*d^2 = (c^2-d^2)^2. So clearly we have LHS = RHS, as both are t^2. Also, since both pairs of numbers we picked meet the condition to generate triples (i.e. one odd and one even number with hcf=1), we have a new pair of triples of the required form.

8. Any other explanation on triples? For example, if $x^2+y^2=130$, how would you determine that x=7 and y=9?

9. Well then I suggest you do my suggestion above; considering reduction mod 10 is probably the easiest. Also it may be of use to you to know that every Pythagorean Triple is generated as follows. Let $m,n\in \mathbb{N}$ with $n<m$. Then $(m^2-n^2,2mn,m^2+n^2)$ is a Pythagorean Triple. Of course this method takes a while and yields lots of non-primitive ones. I suggest memorizing the squares of integers up to 20 or 25 before the exam; it could save you a lot of time.
Take your c^2 and subtract some squares from it, and see if you recognize the result as a perfect square; that is how I did it when you sent me the question the first time.
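A short generator for the triples discussed above (a sketch of my own; it imposes the coprime, opposite-parity condition so only primitive triples come out):

```python
from math import gcd

# Primitive Pythagorean triples (m^2 - n^2, 2mn, m^2 + n^2) with m > n >= 1,
# gcd(m, n) = 1 and m, n of opposite parity.
def primitive_triples(limit):
    for m in range(2, limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                yield (m*m - n*n, 2*m*n, m*m + n*n)

for triple in primitive_triples(6):
    print(triple)   # (3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), ...
```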
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306036233901978, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/35185/defining-entanglement-in-subspaces-of-tensor-product
# Defining entanglement in subspaces of tensor product

I have asked the question on math.stackexchange, but perhaps it is more relevant here. Hence I am re-posting it with the necessary reediting.

Let $\mathcal{H}=\mathbb{C}^n$ be a Hilbert space. A state $\rho\in\mathcal{B(H)}$ is a positive semi-definite operator with unit trace. $\rho\in \mathcal{B(H)}$, where $\mathcal{H}=\mathcal{H}_1\otimes\mathcal{H}_2=\mathbb{C}^{n_1}\otimes\mathbb{C}^{n_2}$, is called entangled if it cannot be written as a convex sum of one-dimensional projections of the form $P_x\otimes P_y$, where $|x\rangle\in\mathcal{H}_1$ and $|y\rangle\in\mathcal{H}_2$.

In a similar spirit, can we define entanglement in the symmetric space $\mathcal{H}\bigvee\mathcal{H}$ and antisymmetric space $\mathcal{H}\bigwedge\mathcal{H}$? As you have already understood, I am looking for entanglement of indistinguishable particles, so the above definition for the entanglement of distinguishable particles does not work. A quick googling gave me a few papers, which refer to different definitions (for multipartite systems, and some are based on certain entropic conditions). Hence I want to know whether there are mathematical definitions for entanglement in such systems which are closest in spirit to the definition mentioned above (for distinguishable particles). Thanks in advance for any help in this direction. -

What properties do you want entangled/unentangled systems of bosons or fermions to have? – Peter Shor Aug 30 '12 at 5:25
@PeterShor It may sound stupid, but actually I did not consider any particular property of bosons and fermions. I wanted to see the problem of entanglement witnesses (for mixed states) for indistinguishable particles and construct an approach by using positive but not completely positive maps. – rsg Aug 31 '12 at 5:14

## 1 Answer

For symmetric or antisymmetric tensor products, the most useful definition of an unentangled state is a state of the form $$|\psi\rangle = \sum_{p} (-1)^{2J\cdot |p|} \prod^\otimes_i |\psi_{p(i)}\rangle$$ which is just a convoluted symbolic expression for the totally symmetrized (for bosons) or totally antisymmetrized (for fermions) tensor product. The sum goes over all permutations $i\mapsto p(i)$. The power of $(-1)$ is $1$ for bosons, i.e. $J\in {\mathbb Z}$, while it is $(-1)^{|p|}$, i.e. the sign of the permutation, for fermions with $J\in {\mathbb Z}+1/2$. The product is a tensor product. All states that can't be written in this form are entangled. If we kept the usual definition of unentangled states as "strict tensor products", there would be too few unentangled states, because most multi-boson states and almost all multi-fermion states refuse to be strict tensor products. So the definition above makes the form of unentangled states more general. However, it may be too general for various purposes. In general, the tensor factors $|\psi_{p(i)}\rangle$ should correspond to states where objects are localized in different regions of space, otherwise a state that is unentangled according to the definition above could be entangled "morally". In particular, in quantum field theory, unentangled states may be obtained as $$A^\dagger B^\dagger C^\dagger \dots |0\rangle$$ where $A^\dagger$ and others are polynomials in creation operators but $A^\dagger,B^\dagger,C^\dagger$ are only made of creation operators acting on disjoint, non-overlapping regions.
The actual wave function of the state will have to be symmetrized or antisymmetrized, so it won't be a strict tensor product, but a state of the form above will still enjoy many features of unentangled states. The discussion about the non-overlapping regions may get important and subtle because, for example, the simple singlet state of two spins, $|up\rangle |down\rangle - |down\rangle |up\rangle$, is entangled according to the normal definition but it could end up being unentangled because it is an antisymmetrized tensor product. However, it usually only makes sense to call such a state unentangled if the up-spinning and down-spinning electrons are also located at different positions. If they share the same location, the strict singlet state should be called entangled in all conventions. -

Lubos: why should this be called an unentangled state? From general principles, you would expect a thermal state to be unentangled. According to your definition, it's not. – Peter Shor Aug 30 '12 at 20:59
Dear @Peter Shor, I only discussed pure states. Thermal states are mixed states and this answer of mine hasn't attempted to discuss which of those are entangled. It could actually be easier and more natural to define entangled states - even in the presence of identical particles - for mixed states. The cleanest general definition would be that unentangled are those whose degrees of freedom may be divided into regions and all the probabilities $P(A=A_i,B=B_i)$ of combined properties referring to regions a,b factorize. – Luboš Motl Aug 31 '12 at 5:47
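To make the displayed definition concrete, here is a small NumPy sketch that builds the (anti)symmetrized product state from given single-particle vectors (my own illustration; the overall normalization is a convention I chose, and for fermions with repeated single-particle states the unnormalized sum vanishes by Pauli exclusion):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given as a tuple of images of 0..n-1."""
    sign, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:      # each even-length cycle flips the sign
            sign = -sign
    return sign

def product_state(states, fermionic):
    """Normalized (anti)symmetrized tensor product of single-particle vectors."""
    total = 0
    for p in permutations(range(len(states))):
        term = np.array([1.0])
        for i in p:
            term = np.kron(term, states[i])
        total = total + (perm_sign(p) if fermionic else 1) * term
    return total / np.linalg.norm(total)

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(product_state([up, down], fermionic=True))   # singlet (|ud> - |du>)/sqrt(2)
print(product_state([up, down], fermionic=False))  # symmetric (|ud> + |du>)/sqrt(2)
```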
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228827357292175, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/51827/range-of-forces-from-mass-of-force-carrier
# Range of forces from mass of force carrier?

Why is $\frac{\hbar}{mc}$ a good estimate of the range of the four forces, where $m$ is the mass of the carrier particle of the force? Inputting the pion mass gives $1.4\ \mathrm{fm}$ for the hadronic force. I have one book that tries to justify the formula by the uncertainty principle and another book that says that the argument is bogus. So does this follow from the uncertainty principle? -

4 – Michael Brown Jan 22 at 0:34
Thanks, I'll accept this as the answer. – fisk Jan 22 at 7:59
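A one-line check of the pion figure quoted in the question (constants rounded; this is just $\hbar c \approx 197.3\ \mathrm{MeV\cdot fm}$ divided by $m_\pi c^2 \approx 139.6\ \mathrm{MeV}$):

```python
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s
MeV = 1.602176634e-13      # J
m_pi = 139.57 * MeV / c**2 # charged pion mass in kg
print(hbar / (m_pi * c))   # ~1.41e-15 m, i.e. about 1.4 fm
```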
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9274347424507141, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/178785-computing-volume-higher-dimensional-polynomials-4th-5th-etc.html
# Thread: 1. ## Computing the volume of higher dimensional hyperspheres? (4th, 5th, etc)

So, I'm working on a problem set and I'm asked to "compute the volume of the 4th dimensional unit hypersphere x^2 + y^2 + z^2 + w^2 = 1" and then after that it goes on to ask me to do the same for 5th, 6th, and 7th dimensional hyperspheres. (5th: x^2 + y^2 + z^2 + w^2 + v^2 = 1, 6th: x^2 + y^2 + z^2 + w^2 + v^2 + s^2 = 1, 7th: x^2 + y^2 + z^2 + w^2 + v^2 + s^2 + t^2 = 1.) Actually computing these integrals won't be too hard, since I can use the program Maple, but it's the limits that I'm confused about. What would be the limits of the integrals? How do I figure that out?

2. Originally Posted by Rumor [question quoted above] Alright, let's start in $\mathbb{R}^2$ and see if we can find a pattern. The "volume" here will be the area of the unit circle $x^2+y^2=1 \iff y=\pm\sqrt{1-x^2}$. So if we integrate out the $y$'s, the projection onto the $x$ axis is $[-1,1]$, and this gives the integral $\int_{-1}^{1}\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}dydx$. Now in three dimensions we get $x^2+y^2+z^2=1 \iff z=\pm\sqrt{1-x^2-y^2}$. But now the projection of the sphere into the $xy$ plane is the unit circle! So we get one new piece and all of the stuff from before: $\int_{-1}^{1}\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}\int_{-\sqrt{1-x^2-y^2}}^{\sqrt{1-x^2-y^2}}dzdydx$. I bet you can guess what the next one will be: $x^2+y^2+z^2+w^2=1 \iff w=\pm\sqrt{1-x^2-y^2-z^2}$, and now this projection will be the unit sphere in $\mathbb{R}^3$: $\int_{-1}^{1}\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}\int_{-\sqrt{1-x^2-y^2}}^{\sqrt{1-x^2-y^2}}\int_{-\sqrt{1-x^2-y^2-z^2}}^{\sqrt{1-x^2-y^2-z^2}}dwdzdydx$
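Whatever limits one sets up, the resulting numbers can be checked against the closed form $V_n = \pi^{n/2}/\Gamma(n/2+1)$ for the volume of the unit $n$-ball. A minimal sketch using only the Python standard library (as an alternative to the Maple computation mentioned in the thread):

```python
from math import pi, gamma

# Closed form for the volume of the unit n-ball: V_n = pi^(n/2) / Gamma(n/2 + 1)
for n in range(2, 8):
    V = pi ** (n / 2) / gamma(n / 2 + 1)
    print(f"n = {n}: V_n = {V:.6f}")
# n = 4 gives pi^2/2 ~ 4.934802; n = 5 gives 8*pi^2/15 ~ 5.263789
```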
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237054586410522, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/44726/the-analogy-between-temperature-and-imaginary-time
# The analogy between temperature and imaginary time There are many statements about the relation between time and temperature in statistical physics and quantum field theory; the basic idea is to interpret (inverse) temperature in statistics as "time" in quantum field theory, so that thermal fluctuations are a kind of quantum fluctuation. However, when I try to carry this analogy over to gravity, I find it hard to imagine the energy momentum tensor in terms of temperature instead of time. Could you physicists give me some hints about that? - ## 2 Answers I think your formulation of the analogy between temperature and time in QFT may be confusing you. (Your second question about gravity doesn't really make any sense.) The idea is to interpret temperature as "duration in imaginary time". You seem to be thinking of it as something more like "direction in time". More precisely: Suppose you compute the expectation value $Z[O]$ of an observable $O$ using the path integral in a universe where time is periodic with period $P$. You can vary $P$, so you can think of this expectation value as a function of $P$. If you set $P = -i\hbar/kT$, you will discover that the path integral formula transforms into the formula for the expectation value with respect to the Boltzmann distribution associated to the Euclidean action. - Oh, thank you for reminding me of that. I did mix these two concepts up. It seems clear now. – Yingfei Gu Nov 21 '12 at 0:23 However, can I still imagine changing the temperature and seeing what the partition function will be? In this sense, it seems equivalent to changing the imaginary "time" duration in a quantum system. Then, if we look at the evolution of the system, the "time" seems to be defined? – Yingfei Gu Nov 21 '12 at 0:34 In a covariant relativistic setting, you need to replace time $t$ by the spacetime position 4-vector $x$, energy $H$ by the energy-momentum 4-vector $P$, and the temperature by a temperature 4-vector $\beta$. In place of the unitary map $e^{-itH/\hbar}$ you get $e^{-ix\cdot P/\hbar}$, and in place of the canonical density matrix $e^{-\beta H}$ you get $e^{-\beta\cdot P}$. Then analytic continuation works as in the nonrelativistic case. In general relativity, things are more complicated as temperature becomes a field. Moreover it is not really clear how to do statistical mechanics, as quantization itself is an unsolved problem. -
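The "duration in imaginary time" statement can be illustrated on a finite-dimensional toy model: continuing the evolution operator $e^{-itH}$ to $t = -i\beta$ (units $\hbar = 1$) reproduces the Boltzmann weight $e^{-\beta H}$. A minimal sketch (assuming numpy/scipy and a made-up two-level Hamiltonian):

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level Hamiltonian (hbar = 1); any Hermitian matrix works here
H = np.array([[1.0, 0.3], [0.3, -1.0]])
beta = 2.0

# Real-time evolution U(t) = exp(-i t H), continued to t = -i*beta
U_imag = expm(-1j * (-1j * beta) * H)   # equals expm(-beta * H)
Z_path = np.trace(U_imag).real          # trace over a period of imaginary time
Z_stat = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))  # Boltzmann sum
print(Z_path, Z_stat)  # the two partition functions agree
```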
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473448991775513, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/122617-derivate-f-x-x-2-e-3x.html
# Thread: 1. ## Derivative of f(x)=x^2*e^(-3x)

Hi guys, I've been trying to solve this for a little while now but I feel kinda lost and I thought you could maybe help me out. So f(x)=x^2*e^(-3x), f'(x)=? Thank you.

2. Originally Posted by Blacklaw [question quoted above] If $f(x) = g(x)h(x)$, then $f'(x) = g'(x)h(x) + g(x)h'(x)$. Take $g(x) = x^2$ and $h(x) = e^{-3x}$.

3. Originally Posted by dedust If $f(x) = g(x)h(x)$, then $f'(x) = g'(x)h(x) + g(x)h'(x)$. Take $g(x) = x^2$ and $h(x) = e^{-3x}$. I get $2x*e^{-3x}+x^2*e^{-3x}$. But I know for a fact that the correct answer is $2x*e^{-3x}+x^2*e^{-3x}*(-3)$. Please explain how I get the (-3)?

4. Originally Posted by Blacklaw [...] $\frac{d}{dx}e^{-3x}=-3e^{-3x}$. You did not know the answer for a fact if it was what was in the back of the book. You know it for a fact if you know why it is so, and now you should. CB

5. Originally Posted by CaptainBlack $\frac{d}{dx}e^{-3x}=-3e^{-3x}$ [...] Does it always go like that? I thought the derivative of $e^x$ was $e^x$? Thank you for your help, I truly appreciate it.

6. Originally Posted by Blacklaw Does it always go like that? That's what the chain rule says. Just in case a picture helps... [balloon-calculus diagrams omitted] Straight continuous lines differentiate downwards (integrate up) with respect to x, and the straight dashed line similarly but with respect to the dashed balloon expression (the inner function of the composite which is subject to the chain rule). PS: Wrapping the chain rule inside the (legs-uncrossed version of) the product rule gives the full derivative. [diagrams omitted]

7. I think I get it now. Thank you very much for helping me out.

8. $f(x) = x^2 e^{-3x}$, so $f'(x) = x^2 e^{-3x} (-3) + e^{-3x} 2x$ by the product and chain rules, i.e. $f'(x) = x e^{-3x} [ -3x + 2 ]$. Hope this helps.

9. Originally Posted by rshekhar.in [...] What you label as "by chain rule" is mainly the product rule and, to a lesser extent, the chain rule. CB

10. Originally Posted by Blacklaw Does it always go like that? I thought the derivative of $e^x$ was $e^x$? [...] The derivative of $e^x$ is $e^x$, but that is not what you have. What you should know is that the derivative of $e^{ax}$ is $ae^{ax}$, by the chain rule if you must, but you should know it because it will occur so often. CB
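A symbolic spot-check of the thread's final result (a sketch assuming sympy; not part of the original forum posts):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.exp(-3*x)

# Product rule + chain rule, done symbolically
fprime = sp.diff(f, x)
print(fprime)  # -3*x**2*exp(-3*x) + 2*x*exp(-3*x), i.e. x*(2 - 3*x)*exp(-3*x)

# Confirm it matches post 8's factored form
assert sp.simplify(fprime - x*(2 - 3*x)*sp.exp(-3*x)) == 0
```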
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9663773775100708, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/194398-easiest-way-factor-quadratic-equations-polynomials.html
# Thread: 1. ## Easiest way to factor quadratic equations/polynomials

I'm told that the "British" method is the most foolproof without having to continuously guess and check, and is very logical, because I have trouble finding common factors and stuff in my head. So for this problem: x^2 + 11x + 12. I multiply 12 and 1 as per the method, and I get 12. Now I need to find two numbers whose product is 12 and which add up to 11, and I can't find them. So is there another method with which I can definitely get the answer, foolproof, without having to guess and check? And what if the coefficients of ax^2, bx, and c are huge numbers? It'd be even more difficult using either the British method or finding the GCF, right? And what if b or c is 0? Then the British method won't work at all.

2. ## Re: Easiest way to factor quadratic equations/polynomials Yes, indeed; in that case the quadratic formula can be useful.

3. ## Re: Easiest way to factor quadratic equations/polynomials Hi daigo! The standard method to solve a quadratic equation is by use of the quadratic formula. The solution of $ax^2+bx+c=0$ is $x={-b \pm \sqrt{b^2 - 4ac} \over 2a}$. The part under the square root, $b^2 - 4ac$, is called the discriminant. If it is negative there are no real solutions.

4. ## Re: Easiest way to factor quadratic equations/polynomials Ah, thanks. We didn't learn the quadratic formula yet; I just jumped ahead to a bunch of questions while working on the problems that didn't require it, I guess.

5. ## Re: Easiest way to factor quadratic equations/polynomials In fact, most quadratics cannot be factored with integer (or rational) coefficients.

6. ## Re: Easiest way to factor quadratic equations/polynomials Originally Posted by daigo [question quoted above] If the method you describe fails, then to factorise the quadratic (if it factorises; not all quadratics do) you need to complete the square and then, if possible, use the Difference of Two Squares. $\displaystyle \begin{align*} x^2 + 11x + 12 &= x^2 + 11x + \left(\frac{11}{2}\right)^2 - \left(\frac{11}{2}\right)^2 + 12 \\ &= \left(x + \frac{11}{2}\right)^2 - \frac{121}{4} + \frac{48}{4} \\ &= \left(x + \frac{11}{2}\right)^2 - \frac{73}{4} \\ &= \left(x + \frac{11}{2}\right)^2 - \left(\frac{\sqrt{73}}{2}\right)^2 \\ &= \left(x + \frac{11}{2} - \frac{\sqrt{73}}{2}\right)\left(x + \frac{11}{2} + \frac{\sqrt{73}}{2}\right) \end{align*}$
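For completeness, the roots produced by the quadratic formula match the completing-the-square factorization above; a minimal numeric sketch (not from the thread):

```python
from math import sqrt

# Roots of x^2 + 11x + 12 via the quadratic formula
a, b, c = 1, 11, 12
disc = b**2 - 4*a*c          # 121 - 48 = 73; not a perfect square,
                             # so no factorization with rational coefficients
r1 = (-b + sqrt(disc)) / (2*a)   # -11/2 + sqrt(73)/2
r2 = (-b - sqrt(disc)) / (2*a)   # -11/2 - sqrt(73)/2
print(r1, r2)  # x^2 + 11x + 12 = (x - r1)(x - r2)
```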
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.94139164686203, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/14615/on-einsteins-original-paper-speed-of-light-in-different-reference-frames
On Einstein's original paper: speed of light in different reference frames In Einstein's original paper "On the electrodynamics of moving bodies", in section 2 of the first part (Kinematics), the following thought experiment is described: a rod is given a constant speed $v$ in the direction of the $x$ axis (increasing $x$). At both ends of the rod, A and B, there is a clock, and both clocks are synchronous in the stationary system. Then a ray of light is sent from A at time $t_A$, is reflected at B at time $t_B$, and arrives back at A at time $t'_A$. It is then stated that, "taking into consideration the principle of the constancy of the velocity of light", we get: $$t_B - t_A = \frac{r_{AB}}{c - v} \quad\text{and}\quad t'_A - t_B = \frac{r_{AB}}{c + v}$$ where $r_{AB}$ is the length of the moving rod, measured in the stationary system. Now, if, besides the rod's length, both time intervals refer to time in the stationary system, then so must the speed. Given the previous assumption of its constancy, why are terms like $c+v$ and $c-v$ in the above equations? Furthermore, from my previous knowledge of special relativity, I would have thought the observer moving with the rod would declare the clocks to be synchronous, while the stationary observer would declare the clocks to be not synchronous. Yet the article states exactly the opposite. Why is this so? I know I'm missing something here, I just can't figure out exactly what it is. Thanks in advance for your help. - 2 Answers At time $t_A$, the ray of light has to travel distance $r_{AB}$ to get to the other end. Since the other end is travelling at velocity $v$, the ray of light has to travel an additional distance $v(t_B - t_A)$, giving a total distance $r_{AB} + v(t_B - t_A)$. Since the ray of light travels at velocity $c$, this distance must also equal $c(t_B - t_A)$: $$r_{AB} + v(t_B - t_A) = c(t_B - t_A)$$ which on rearranging gives one of the expressions in the question; a similar argument gives the other. Usually, the clocks in the same frame are configured to be synchronous with one another. However, Einstein is showing that clocks that are synchronous with one another in one frame aren't so in another. This is near the start of the paper, where he hasn't derived the Lorentz transformations yet but is concentrating on the physics. So at this stage, he's defined the moving clocks to be synchronous with those in the stationary frame, meaning they show the same time at the same location. For two clocks a distance apart, he's defined them to be synchronous with one another if $t_B - t_A = t'_A - t_B$, using the transmission and reflection of light as described in the question. And since in this case $t_B - t_A\neq t'_A - t_B$, the moving clocks A and B are not synchronous with one another in the moving frame even though they are synchronous in the stationary frame. - Thanks, that was most helpful. However, I still have a small itch. The calculation you show is how an observer on the stationary platform would proceed (i.e. taking into account the motion of the rod). Furthermore, Einstein states that $r_{AB}$ is "the length of the moving rod---measured in the stationary system". This would indicate that from the perspective of the stationary observer, the clocks at both ends of the rod (thus moving with it) would not be synchronous. Yet the article states this conclusion is drawn by observers moving with the rod. Where am I going astray? – wmnorth Sep 18 '11 at 11:12 The observers in the moving frame are applying the synchronisation criterion to their clocks as a check.
But their clocks at each location in space are set to be the same as a clock in the stationary frame at the same position. From their point of view, a clock in the stationary frame is moving, of course, but that doesn't prevent them from reading off the time and setting their own clock to match it. Their clocks are therefore synchronized with those of the stationary frame. – John McVirgo Sep 18 '11 at 15:03 cont. The calculation is for the time shown on a clock in the stationary frame at the events of transmission and reflection of light, performed by the observer in the moving frame, and therefore for what their clock reads. Remember the observer in the moving frame has used this time to synchronise his clocks, as defined by Einstein when he says in the passage "at the two ends A and B of the rod, clocks are placed which synchronize with the clocks of the stationary system, that is to say that their indications correspond at any instant to the 'time of the stationary system'". – John McVirgo Sep 18 '11 at 15:12 When setting up this description, the author states: We imagine further that at the two ends A and B of the rod, clocks are placed which synchronize with the clocks of the stationary system, that is to say that their indications correspond at any instant to the "time of the stationary system" at the places where they happen to be. These clocks are therefore "synchronous in the stationary system." It is a part of the initial conditions that the clocks are synchronous in the stationary frame. The asynchrony of the clocks in the rod frame is thus assured by the principle of relativity. -
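Putting numbers into the two expressions makes the failure of the synchrony criterion explicit; a minimal sketch with made-up values ($c$ normalized to 1):

```python
# Light signal along a rod moving at speed v; r_AB is the rod length
# as measured in the stationary system (c = 1 units)
c = 1.0
v = 0.5
r_AB = 1.0

t_forward = r_AB / (c - v)   # A -> B: light chases the receding end B
t_backward = r_AB / (c + v)  # B -> A: light meets the approaching end A
print(t_forward, t_backward)
# 2.0 vs ~0.667: t_B - t_A != t'_A - t_B, so the rod's clocks fail
# Einstein's synchrony criterion in the rod frame.
```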
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.957931637763977, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/1159/transformation-of-volatility-bs
# Transformation of Volatility - BS I have recently seen a paper about the Boeing approach that replaces the "normal" Stdev in the BS formula with the Stdev \begin{equation} \sigma'=\sqrt{\frac{\ln\left[\left(1+\frac{\sigma}{\mu}\right)^{2}\right]}{t}} \end{equation} $\sigma$ and $\mu$ being the "normal" Stdev and mean, respectively. (Both in absolute values, resulting from a simulation of the pay-offs.) Since it is about real options, it sounds reasonable to have the volatility decrease approaching the execution date of a project, but why design the volatility like this? I have plotted the function here via Wolframalpha.com. Even though the volatility should be somewhere around 10% in this example, it never assumes that value. Why does that make sense? I've run a simulation and compared the values. Since the volatility changes significantly, the option value changes, of course, are significant. Here are some equivalent expressions; maybe they remind somebody of something that might help? $\Longleftrightarrow t\sigma'^{2}=\ln\left[\left(1+\frac{\sigma}{\mu}\right)^{2}\right]$ $\Longleftrightarrow\sqrt{\exp(t\sigma'^{2})}-1=\frac{\sigma}{\mu}$ $\Longleftrightarrow\sigma=\mu\left[\sqrt{\exp(t\sigma'^{2})}-1\right]$ It somehow looks similar to the arithmetic moments of the log-normal distribution, but it does not fit 100%. - real options... have they started to take into account that one is short one's competitor's options? – nicolas May 22 '11 at 18:52
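A quick numeric round-trip of the transformation and its last rearrangement (a sketch with made-up values of $\sigma$, $\mu$, $t$; reading the formula as $\ln$ of the square, consistent with the equivalences above):

```python
from math import exp, log, sqrt

# Forward transform: sigma' = sqrt( ln( (1 + sigma/mu)^2 ) / t )
sigma, mu, t = 10.0, 100.0, 2.0
sigma_p = sqrt(log((1 + sigma / mu) ** 2) / t)

# Inverse, as in the last equivalent expression:
# sigma = mu * ( sqrt(exp(t * sigma'^2)) - 1 )
sigma_back = mu * (sqrt(exp(t * sigma_p ** 2)) - 1)
print(sigma_p, sigma_back)  # sigma_back recovers sigma = 10.0 exactly
```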
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8898986577987671, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/1927/whats-the-name-of-this-set
# What's the name of this set Take a field $(K, +,\times,0,1)$ and a set $S(K) \subset K$ such that: $$\forall_{e \in K}\exists_{a \in S(K)}\exists_{p \in K}\; e = a \times p \times p$$ $$\lnot \exists_{a, b \in S(K)}\exists_{p \in K} (a \times p \times p = b \land a \neq b)$$ For example $S(\mathbb{R}) = \{-1, 1\}$, $S(\mathbb{C}) = \{1\}$, $S(\mathbb{Z}_3) = \{1, 2\}$, $S(\mathbb{Z}_5) = \{1, 2\}$. Of course such sets are not unique; for example $S(\mathbb{R}) = \{-e,\pi\}$ is also possible. (Sorry if I misuse terms - I haven't found an English translation - but I found this useful for generalizing the signature of quadratic polynomials.) - ## 1 Answer If I am deciphering your notation correctly, I would call $S$ "a set of coset representatives for $K^{\times}/(K^{\times})^2$." - I haven't checked it fully, but from what I understand it looks like this. – Maciej Piechotka Aug 9 '10 at 20:28
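For finite fields one can compute such a set of representatives of $K^{\times}/(K^{\times})^2$ by brute force; a minimal sketch for $\mathbb{Z}_p$ (the function name is mine, not standard):

```python
def square_class_reps(p):
    """Greedy representatives of F_p^* / (F_p^*)^2 (0 is covered by p = 0)."""
    squares = {(x * x) % p for x in range(1, p)}
    reps, covered = [], set()
    for a in range(1, p):
        if a not in covered:
            reps.append(a)
            covered |= {(a * s) % p for s in squares}  # the coset a*(F_p^*)^2
    return reps

print(square_class_reps(3))  # [1, 2], matching S(Z_3) above
print(square_class_reps(5))  # [1, 2], matching S(Z_5) above
print(square_class_reps(7))  # [1, 3]
```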
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9176984429359436, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/feynman-diagram+gauge-theory
# Tagged Questions

### Four-gauge-boson vertex in non-Abelian gauge theories (1 answer)
In Peskin & Schroeder's book, page 524, the following diagram is calculated for the gauge boson self-energy in order $g^2$: In dimensional regularization, its contribution is given by ...

### Can a photon see ghosts? (1 answer)
Does it make sense to introduce Faddeev–Popov ghost fields for abelian gauge field theories? Wikipedia says the coupling term in the Lagrangian "doesn't have any effect", but I don't really know ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073473811149597, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2628/linear-regression-and-assets-direction-prediction
# Linear regression and asset direction prediction I have the following asset returns Y and the predictions for the same periods Y': ````Y = { 10, 200, -1000, -1, -7 } Y' = { 1, 2, -3, -4, -5 } ```` The OLS R-squared for these 2 vectors is 0.11 and the F-statistic 0.39, so clearly the explained variance is not very high. However, the variance analysis does not show that all the predictions in Y' matched the return direction of Y. To capture this point I would have to run a separate study counting each (Yn, Y'n) pair that has the same sign. Are there better ways to fit a model and optimize the IV coefficients for return direction? Ideally I would like to fit a model that gives more weight to asset direction than to variance. - ## 1 Answer It sounds like all you need is to run a logistic regression, with the sign of $Y$ as your dependent variable instead of $Y$ itself. This will only give weight to the sign of the variable, and not to the magnitude. Once you have reformulated your question in more general terms (sign and magnitude of $Y$, rather than direction and volatility), you may be able to get further help from Cross Validated. - Yes, that's what I was suspecting with my partial understanding of logistic regression. But let's stretch my question a bit: what if I wanted to fit a model for different return targets rather than just an up or down outcome? Would I have to run a logistic regression for each target level and then compare each R-squared, or are there SEMs that can solve for the best return target (in the variance sense) and direction? – Robert Kubrick Dec 20 '11 at 15:55 – Tal Fishman Dec 20 '11 at 15:57 It is; these questions have been a bit too beginner level for this site. Cross Validated is more open to beginners, I think. – Tal Fishman Dec 20 '11 at 16:03
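A minimal sketch of the suggested logistic regression on the sign of $Y$, using the five pairs from the question (assuming scikit-learn; illustrative only, not a recommendation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

Y  = np.array([10, 200, -1000, -1, -7])
Yp = np.array([1, 2, -3, -4, -5]).reshape(-1, 1)  # predictions as the feature

# Dependent variable is the *sign* of Y, not Y itself
direction = (Y > 0).astype(int)

model = LogisticRegression().fit(Yp, direction)
print(model.predict(Yp))           # predicted up/down labels
print(model.score(Yp, direction))  # fraction of directions matched: 1.0 here,
# even though the OLS R-squared on the raw values was only 0.11
```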
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349676966667175, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/98789?sort=oldest
## Stability of the spectrum for perturbations of the boundary Consider the Laplace operator on a smooth bounded open set with Dirichlet boundary conditions. I need some result of the following type: if one perturbs the boundary in a suitable sense to be determined, say depending on a small parameter $\epsilon$, then it is possible to order the eigenvalues and eigenfunctions so that they are smooth functions of $\epsilon$ and of the spatial variables. At first this seemed a very natural result, but the total scarcity of references suggests otherwise, and I am starting to think that this kind of result is rather difficult if one wants to achieve some generality. Has anyone encountered any result in this direction? - Assume your domains are diffeomorphic to a ball. Then instead of looking at the Laplace operator on the domain, you can look at the Laplace-Beltrami operator on the ball with a strange metric. This reduces the problem to comparing the Laplace operator for two different metrics. If I remember correctly, the main terms will be the same, so all that is left is a small perturbation in an appropriate operator-theoretical sense. There might be quite a few details here that are annoying to work out, in particular whether the closeness of metrics corresponds to the closeness you want. – Helge Jul 21 at 0:00 Indeed, but I guess one can obtain at most the asymptotics of eigenvalues and some quantities connected with the eigenfunctions, while I need more or less a sort of Fourier expansion depending on the parameter (probably too much) – Piero D'Ancona Jul 21 at 17:25 ## 3 Answers This is true, as long as your domain depends smoothly upon one real parameter. Say that you are interested in the first $n$ eigenvalues. Using a Lyapunov-Schmidt procedure, you may reduce to the situation of an $n\times n$ symmetric matrix $S(\epsilon)$. Then look at Kato's book in the Grundlehren series. If instead your domain depends on two or more parameters, the matrix $S$ will depend on several variables, and the eigenvalues will not be smooth functions unless they remain simple. - Thank you Denis. Actually the domains I have in mind are sections of a higher dimensional domain, and I am looking for sensible assumptions on the global domain such that the sections have (locally) similar spectral properties. Clearly in the greatest generality this seems hopeless, but maybe for domains with a special structure... – Piero D'Ancona Jun 4 at 19:09 By the way, when I say smooth I mean $C^2$, but I could probably go a long way with Lipschitz only – Piero D'Ancona Jun 4 at 19:11 1 I remember that F. Murat wrote a paper on the dependency of elliptic BVPs upon the domain. – Denis Serre Jun 5 at 5:05 Look at "Perturbation of the boundary in boundary-value problems in partial differential equations" by Dan Henry, London Math Society Lecture Notes #318 - Thank you, I'll take a look – Piero D'Ancona Jun 4 at 19:09 Ah, now I recall I already checked that book, but it gave information on the eigenvalues mostly. I would rather need some kind of Fourier expansion depending on the parameter(s). It is easy to conceive some special cases (e.g. a ball with varying radius) and I was hoping some more general class of examples could be known.
– Piero D'Ancona Jun 4 at 19:21 Hello, my name is Marcus Morocco. My doctoral thesis was on exactly these issues: I calculated the expressions for the first and second derivatives of the eigenvalues and eigenvectors of the Laplace operator with Neumann boundary condition. If there is interest I can send the file. - Thank you for the info. Did you publish your results? – Piero D'Ancona Jul 21 at 17:23
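For intuition, the one-dimensional analogue is completely explicit: the Dirichlet eigenvalues of $-u''$ on $(0,L)$ are $(n\pi/L)^2$, which depend smoothly on the boundary perturbation $L \mapsto L+\epsilon$. A minimal sketch (illustrative only; it does not touch the multi-parameter subtleties discussed above):

```python
import numpy as np

def dirichlet_eigs(L, n_max=3):
    """Eigenvalues (n*pi/L)^2 of -u'' = lam*u on (0, L) with u(0) = u(L) = 0."""
    n = np.arange(1, n_max + 1)
    return (n * np.pi / L) ** 2

# Perturb the boundary: L -> L + eps; each eigenvalue moves smoothly, O(eps)
for eps in (0.0, 0.01, 0.02):
    print(eps, dirichlet_eigs(1.0 + eps))
```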
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8969725370407104, "perplexity_flag": "head"}
http://mathoverflow.net/questions/106527?sort=votes
## Hyper(co)homology of exact (acyclic) complexes Let $\mathcal{A}$ be an abelian category with enough injectives, and let $K^\bullet \in Kom^+(\mathcal{A})$ be a complex, where $Kom^+(\mathcal{A})$ is the category of cochain complexes over $\mathcal{A}$ bounded on the left. I read in Weibel's homological algebra book that the hyper(co)homology $\mathbb{H}^n(K^\bullet)$ of an exact complex $K^\bullet$ is $0$ for all $n$; I also read somewhere else that the hyper(co)homology of an acyclic complex is $0$, but I didn't find a proof in either case. Now let's say that I have a complex $K^\bullet$ whose cohomology satisfies $H^n(K^\bullet) = 0$ for $n \geq m$, for some $m$. I want to know if this implies that $\mathbb{H}^n(K^\bullet) = 0$ for $n \geq m$? I couldn't find a reference for either case (exact and acyclic); can anybody help me? - 3 Hypercohomology is with respect to a functor on the category $A$. What is that functor in this case? The fact that the hypercohomology of a complex with respect to a functor will vanish when the complex is exact is a consequence of the existence of the first hypercohomology spectral sequence. See for instance Griffiths and Harris, Principles ..., p. 446. The $E_2$ page of the spectral sequence is $0$ so the same must be true of its abutment. – Damian Rössler Sep 6 at 16:23 Thank you, I found the book, very clear exposition, thanks – Mario Carrasco Sep 7 at 18:10 ## 2 Answers Hypercohomology (with respect to any functor) of an exact = acyclic complex is zero, because it is defined on the derived category and an exact complex is quasi-isomorphic to the zero object. The question in your last paragraph is only wishful thinking. If this were true, there would not be any cohomology theories. For an example to see that this fails, take an arbitrary (say finite type over an algebraically closed field) scheme $X$ with a coherent sheaf $\mathscr F$ that has nontrivial higher cohomology groups. (Say the sheaf $\mathscr O(-2)$ on $\mathbb P^1$.) Then take $\mathcal A$ to be the category of coherent sheaves on $X$, $K^\bullet=\mathscr F$ (that is, $K^0=\mathscr F$ and $K^i=0$ for $i\neq 0$) and consider the hypercohomology corresponding to the global section functor. By construction $h^n(K^\bullet)=0$ for $n\neq 0$, but $\mathbb H^n(K^\bullet)\neq 0$ for some $n>0$. (In the case of $\mathscr O(-2)$ on $\mathbb P^1$, we have $\mathbb H^1(K^\bullet)\neq 0$.) Obviously there are many similar examples. - You mean even for an "acyclic complex" $K^\bullet$, i.e. one with $H^n(K^\bullet) = 0$ for $n \geq 1$? I mean a complex $K^\bullet$ whose cohomology (as a complex) vanishes for $n \geq 1$; I think that's the wrong terminology. I read in Shafarevich's book about algebraic geometry that the hypercohomology of a complex of sheaves that is 'acyclic' (NOT a complex of 'acyclic sheaves') is 0. I'm confused now – Mario Carrasco Sep 7 at 18:34 1 I believe a complex $K^\bullet$ is acyclic if $h^n(K^\bullet)=0$ for all $n\in\mathbb Z$. – Sándor Kovács Sep 7 at 19:18 Perhaps you are thinking of replacing an object with its resolution, i.e., a complex that is exact everywhere except at degree $0$? That is not an acyclic complex and usually has (hyper)cohomology. – Sándor Kovács Sep 7 at 19:21 You know I was confused by the whole exact or acyclic thing; these guys say that EXACT is not the same as ACYCLIC: math.stackexchange.com/questions/27105/….
According to these guys my complex is acyclic, so I guess I was being misled the entire time? Shafarevich wrote in his book that an ACYCLIC complex of sheaves has hypercohomology 0, but didn't provide a proof. When I read that (thinking that my complex is acyclic) I thought that I was done; now I guess he's using the same definition as you and not the one in the discussion in the link. I feel so bad right now – Mario Carrasco Sep 7 at 19:57 2 I added a competing answer to that question on math.stackexchange – Sándor Kovács Sep 7 at 20:41 EDIT: reading your question again, I'm a bit confused about both terminology and what you are actually asking. Below, by an exact complex I mean one with vanishing cohomology (as a complex!), whereas by an acyclic complex I mean a complex of ($\Gamma$-)acyclic objects. In general, the hypercohomology of an acyclic complex is easy to compute (see the first part of my answer), and the hypercohomology of an exact complex is zero (see the second half). To elaborate on Damian's comment (as Sándor Kovács points out, the second half of your question is wishful thinking): One way to define the hypercohomology of a complex $K^\bullet$ is to take a Cartan-Eilenberg resolution $K^{\bullet, \bullet}$ and then take the cohomology of the total complex of $K^{\bullet, \bullet}$. Cartan-Eilenberg resolutions are pretty awesome (they have basically all the nice properties you could ask for); in particular, every column is an injective resolution of the respective term of $K^\bullet$. Now given any left-bounded double complex $K^{\bullet, \bullet}$ there exist two spectral sequences converging to the cohomology of the total complex (see Weibel). The $E_2$ pages are obtained by taking first horizontal and then vertical cohomology, and vice versa. In what follows, I shall write $h^p(K^\bullet)$ for the cohomology of the complex $K^\bullet$, and $H^p(K)$ for the derived functor of global sections on $K$. In our case, taking vertical cohomology first, we get an $E_1$ page $H^q(K^p)$. By your acyclicity assumption, we have $H^q(K^p) = 0$ for $q>0$, i.e. the $E_1$ page just consists of $K^\bullet$ again in the lowest row, and zeros elsewhere. Thus the spectral sequence degenerates at the $E_2$ page, leaving you with $h^p(\Gamma(K^\bullet))$ for the cohomology of the total complex, i.e. hypercohomology (see Weibel again for how the separate pages relate to the cohomology of the total complex). This is the statement you were asking for (to the extent that I am aware of a correct formulation): the hypercohomology of an acyclic complex is just the cohomology of its complex of global sections. In particular, an exact acyclic complex has vanishing hypercohomology. More generally, any exact complex has vanishing hypercohomology: this time look at the spectral sequence where we take horizontal cohomology first. Another awesome property of Cartan-Eilenberg resolutions: the horizontal boundaries and cycles are themselves injectives, and hence taking horizontal cohomology of the Cartan-Eilenberg resolution yields an injective resolution of the cohomology of $K^\bullet$. There is a small difficulty: we are not supposed to compute the hypercohomology of $K^{\bullet,\bullet}$ but of $\Gamma(K^{\bullet,\bullet})$.
However, since injectives are acyclic, "taking horizontal cohomology" and "taking global sections" actually commute in our case, so we are good. In summary: there exists a convergent spectral sequence with $E_2$ page $H^q(h^p K^\bullet)$, which is evidently zero if $K^\bullet$ is exact. Even more generally (and by a very similar argument), hypercohomology factors through quasi-isomorphism. Note: again, as Damian points out, "hypercohomology on some abelian category" does not really make sense. You can in general define hyperderived functors, and then hypercohomology is usually the name for the hyperderived functor of global sections. My above statements hold for the hyperderived functors of an arbitrary left-exact additive functor $\Gamma$. - No, by "acyclic complex" I meant a complex $K^\bullet$ with vanishing cohomology for $n \geq 1$, not a complex of "acyclic objects"; I think I'm using the wrong terminology here. The thing is that my complex $K^\bullet$ fails to be exact in one degree. I read in Shafarevich's algebraic geometry book 2 things: 1) that the hypercohomology of a complex of acyclic sheaves is the cohomology of the complex of global sections (as you wrote) and 2) that the hypercohomology of a complex that is 'acyclic' (which I'm assuming means acyclic as a complex, i.e. the thing that I have) is 0. Thank you so much BTW – Mario Carrasco Sep 7 at 18:28 I see; my use of the terminology may be wrong as well. In any case, just having "almost all" cohomology of $K$ vanish is obviously not enough. – Tom Bachmann Sep 7 at 18:53 Then Shafarevich's book is wrong? – Mario Carrasco Sep 7 at 19:10 @Mario: that's not what an acyclic complex is. – Sándor Kovács Sep 7 at 19:50 Thank YOU. Another thing I was so confused about is: you know how in an abelian category $\mathcal{A}$ you have the derived functors $R^nF(A)$ of a left exact functor $F$ applied to an object $A$. To say that the object $A$ is acyclic is to say that $R^nF(A) = 0$ for $n \geq 1$, right? But that means that the complex $0 \rightarrow F(I^0) \rightarrow F(I^1) \rightarrow F(I^2) \cdots$ is exact except at 0, right? I get confused because I've seen some people use $0 \rightarrow F(I^1) \rightarrow F(I^2) \rightarrow F(I^3) \cdots$ – Mario Carrasco Sep 7 at 21:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 83, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9438184499740601, "perplexity_flag": "head"}
http://mathoverflow.net/questions/100248?sort=votes
## Application of inverse function theorem to get short time existence I am reading a book on curve shortening flow. Optionally, please see this image for the page that is confusing me (I am not allowed to include it in this post since I'm new): http://i.stack.imgur.com/L54lm.png [Thanks to user Leonid from SE for the image. Page 17 of The Curve Shortening Problem by Kai Seng Chou and Xi-Ping Zhu] The authors construct a map $\mathcal{F}$ from $\tilde{C}^{k+2, \alpha}(S^1 \times (0,t))$ to $\tilde{C}^{k, \alpha}(S^1 \times (0,t))$, find its Fréchet derivative and show it's an isomorphism, so we can use the inverse function theorem. They say there exist a $t_0$, an $\epsilon$ and a $\delta$ such that for any $f$ with $\lVert f - \mathcal{F}(v) \rVert < \epsilon$ there exists a unique $u$ such that $\lVert u - v \rVert < \delta$ and $\mathcal{F}(u) = f$ for all $t \leq t_0$. I am confused about the part where they say that "there exists a $t_0$ ... such that $\mathcal{F}(u) = f$ for all $t \leq t_0$". How does this time dependence come into this from the inverse function theorem? The inverse function theorem I know doesn't state anything about such time dependence. The proof is confusingly written (for me anyway). If they fix the space to be $\tilde{C}^{k, \alpha}(S^1 \times (0,t))$, then how can they say anything about the solution only on the restricted time interval $(0,t_0)$? I thought you don't get control of that, only of the space of functions. Can anyone explain this? Are they using some other theorem? Thanks. - ## 1 Answer
- 1 Indeed, the argument for short time existence of a parabolic PDE is essentially the same as the proof for the short time existence of a system of first order ODE's. The only difference is that the curve is a map into a carefully chosen Banach space instead of $R^n$. – Deane Yang Jun 21 at 16:44 Thanks for the reply. – quentinknight Jun 23 at 17:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300713539123535, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Mindlin%e2%80%93Reissner_plate_theory
Mindlin–Reissner plate theory

[Figure: Deformation of a plate highlighting the displacement, the mid-surface (red) and the normal to the mid-surface (blue)]

The Mindlin-Reissner theory of plates is an extension of Kirchhoff–Love plate theory that takes into account shear deformations through-the-thickness of a plate. The theory was proposed in 1951 by Raymond Mindlin.[1] A similar, but not identical, theory had been proposed earlier by Eric Reissner in 1945.[2] Both theories are intended for thick plates in which the normal to the mid-surface remains straight but not necessarily perpendicular to the mid-surface. The Mindlin-Reissner theory is used to calculate the deformations and stresses in a plate whose thickness is of the order of one tenth the planar dimensions, while the Kirchhoff-Love theory is applicable to thinner plates.

The form of Mindlin–Reissner plate theory that is most commonly used is actually due to Mindlin and is more properly called Mindlin plate theory.[3] The Reissner theory is slightly different. Both theories include in-plane shear strains and both are extensions of Kirchhoff-Love plate theory incorporating first-order shear effects.

Mindlin's theory assumes that there is a linear variation of displacement across the plate thickness but that the plate thickness does not change during deformation. This implies that the normal stress through the thickness is ignored; an assumption which is also called the plane stress condition. On the other hand, Reissner's theory assumes that the bending stress is linear while the shear stress is quadratic through the thickness of the plate. This leads to a situation where the displacement through-the-thickness is not necessarily linear and where the plate thickness may change during deformation. Therefore, Reissner's theory does not invoke the plane stress condition.

The Mindlin-Reissner theory is often called the first-order shear deformation theory of plates. Since a first-order shear deformation theory implies a linear displacement variation through the thickness, it is incompatible with Reissner's plate theory.

Mindlin theory

Mindlin's theory was originally derived for isotropic plates using equilibrium considerations. A more general version of the theory based on energy considerations is discussed here.[4]

Assumed displacement field

The Mindlin hypothesis implies that the displacements in the plate have the form

$\begin{align} u_\alpha(\mathbf{x}) & = u^0_\alpha(x_1,x_2) - x_3~\varphi_\alpha ~;~~\alpha=1,2 \\ u_3(\mathbf{x}) & = w^0(x_1, x_2) \end{align}$

where $x_1$ and $x_2$ are the Cartesian coordinates on the mid-surface of the undeformed plate and $x_3$ is the coordinate for the thickness direction, $u^0_\alpha,~ \alpha=1,2$ are the in-plane displacements of the mid-surface, $w^0$ is the displacement of the mid-surface in the $x_3$ direction, and $\varphi_1$ and $\varphi_2$ designate the angles which the normal to the mid-surface makes with the $x_3$ axis. Unlike Kirchhoff-Love plate theory, where $\varphi_\alpha$ are directly related to $w^0$, Mindlin's theory requires that $\varphi_1 \ne w^0_{,1}$ and $\varphi_2 \ne w^0_{,2}$.

[Figure: Displacement of the mid-surface (left) and of a normal (right)]

Strain-displacement relations

Depending on the amount of rotation of the plate normals, two different approximations for the strains can be derived from the basic kinematic assumptions.
For small strains and small rotations the strain-displacement relations for Mindlin-Reissner plates are

$\begin{align} \varepsilon_{\alpha\beta} & = \frac{1}{2}(u^0_{\alpha,\beta}+u^0_{\beta,\alpha}) - \frac{x_3}{2}~(\varphi_{\alpha,\beta}+\varphi_{\beta,\alpha}) \\ \varepsilon_{\alpha 3} & = \cfrac{1}{2}\left(w^0_{,\alpha}- \varphi_\alpha\right) \\ \varepsilon_{33} & = 0 \end{align}$

The shear strain, and hence the shear stress, across the thickness of the plate is not neglected in this theory. However, the shear strain is constant across the thickness of the plate. This cannot be accurate since the shear stress is known to be parabolic even for simple plate geometries. To account for the inaccuracy in the shear strain, a shear correction factor ($\kappa$) is applied so that the correct amount of internal energy is predicted by the theory. Then

$\varepsilon_{\alpha 3} = \cfrac{1}{2}~\kappa~\left(w^0_{,\alpha}- \varphi_\alpha\right)$

Equilibrium equations

The equilibrium equations of a Mindlin-Reissner plate for small strains and small rotations have the form

$\begin{align} & N_{\alpha\beta,\alpha} = 0 \\ & M_{\alpha\beta,\beta}-Q_\alpha = 0 \\ & Q_{\alpha,\alpha}+q = 0 \end{align}$

where $q$ is an applied out-of-plane load. The in-plane stress resultants are defined as

$N_{\alpha\beta} := \int_{-h}^h \sigma_{\alpha\beta}~dx_3 \,,$

the moment resultants are defined as

$M_{\alpha\beta} := \int_{-h}^h x_3~\sigma_{\alpha\beta}~dx_3 \,,$

and the shear resultants are defined as

$Q_\alpha := \kappa~\int_{-h}^h \sigma_{\alpha 3}~dx_3 \,.$

Derivation of equilibrium equations

For the situation where the strains and rotations of the plate are small, the virtual internal energy is given by

$\begin{align} \delta U & = \int_{\Omega^0} \int_{-h}^h \boldsymbol{\sigma}:\delta\boldsymbol{\epsilon}~dx_3~d\Omega = \int_{\Omega^0} \int_{-h}^h \left[\sigma_{\alpha\beta}~\delta\varepsilon_{\alpha\beta} + 2~\kappa~\sigma_{\alpha 3}~\delta\varepsilon_{\alpha 3}\right]~dx_3~d\Omega \\ & = \int_{\Omega^0} \int_{-h}^h \left[\frac{1}{2}~\sigma_{\alpha\beta}~(\delta u^0_{\alpha,\beta}+\delta u^0_{\beta,\alpha}) - \frac{x_3}{2}~\sigma_{\alpha\beta}~(\delta \varphi_{\alpha,\beta}+\delta\varphi_{\beta,\alpha}) + \kappa~\sigma_{\alpha 3}\left(\delta w^0_{,\alpha} - \delta \varphi_\alpha\right)\right]~dx_3~d\Omega \\ & = \int_{\Omega^0} \left[\frac{1}{2}~N_{\alpha\beta}~(\delta u^0_{\alpha,\beta}+\delta u^0_{\beta,\alpha}) - \frac{1}{2}M_{\alpha\beta}~(\delta \varphi_{\alpha,\beta}+\delta\varphi_{\beta,\alpha}) + Q_\alpha\left(\delta w^0_{,\alpha} - \delta \varphi_\alpha\right)\right]~d\Omega \end{align}$

where the stress resultants and stress moment resultants are defined in a way similar to that for Kirchhoff plates.
The shear resultant is defined as

$Q_\alpha := \kappa~\int_{-h}^h \sigma_{\alpha 3}~dx_3$

Integration by parts gives

$\begin{align} \delta U & = \int_{\Omega^0} \left[-\frac{1}{2}~(N_{\alpha\beta,\beta}~\delta u^0_{\alpha}+N_{\alpha\beta,\alpha}~\delta u^0_{\beta}) + \frac{1}{2}(M_{\alpha\beta,\beta}~\delta \varphi_{\alpha}+M_{\alpha\beta,\alpha}\delta\varphi_{\beta}) - Q_{\alpha,\alpha}~\delta w^0 - Q_\alpha~\delta\varphi_\alpha\right]~d\Omega \\ & + \int_{\Gamma^0} \left[\frac{1}{2}~(n_\beta~N_{\alpha\beta}~\delta u^0_\alpha+n_\alpha~N_{\alpha\beta}~\delta u^0_{\beta}) - \frac{1}{2}(n_\beta~M_{\alpha\beta}~\delta \varphi_{\alpha}+n_\alpha M_{\alpha\beta}\delta\varphi_\beta) + n_\alpha~Q_\alpha~\delta w^0\right]~d\Gamma \end{align}$

The symmetry of the stress tensor implies that $N_{\alpha\beta} = N_{\beta\alpha}$ and $M_{\alpha\beta} = M_{\beta\alpha}$. Hence,

$\begin{align} \delta U & = \int_{\Omega^0} \left[-N_{\alpha\beta,\alpha}~\delta u^0_{\beta} + \left(M_{\alpha\beta,\beta}-Q_\alpha\right)~\delta \varphi_{\alpha} - Q_{\alpha,\alpha}~\delta w^0\right]~d\Omega \\ & + \int_{\Gamma^0} \left[n_\alpha~N_{\alpha\beta}~\delta u^0_{\beta} - n_\beta~M_{\alpha\beta}~\delta \varphi_{\alpha} + n_\alpha~Q_\alpha~\delta w^0\right]~d\Gamma \end{align}$

For the special case when the top surface of the plate is loaded by a force per unit area $q(\mathbf{x}^0)$, the virtual work done by the external forces is

$\delta V_{\mathrm{ext}} = \int_{\Omega^0} q~\delta w^0~\mathrm{d}\Omega$

Then, from the principle of virtual work,

$\begin{align} & \int_{\Omega^0} \left[N_{\alpha\beta,\alpha}~\delta u^0_{\beta} - \left(M_{\alpha\beta,\beta}-Q_\alpha\right)~\delta \varphi_{\alpha} + \left(Q_{\alpha,\alpha}+q\right)~\delta w^0 \right]~d\Omega \\ & \qquad \qquad = \int_{\Gamma^0} \left[n_\alpha~N_{\alpha\beta}~\delta u^0_{\beta} - n_\beta~M_{\alpha\beta}~\delta \varphi_{\alpha} + n_\alpha~Q_\alpha~\delta w^0\right]~d\Gamma \end{align}$

Using standard arguments from the calculus of variations, the equilibrium equations for a Mindlin-Reissner plate are

$\begin{align} & N_{\alpha\beta,\alpha} = 0 \\ & M_{\alpha\beta,\beta}-Q_\alpha = 0 \\ & Q_{\alpha,\alpha}+q = 0 \end{align}$

[Figures: Bending moments and normal stresses; torques and shear stresses; shear resultant and shear stresses]

Boundary conditions

The boundary conditions are indicated by the boundary terms in the principle of virtual work. If the only external force is a vertical force on the top surface of the plate, the boundary conditions are

$\begin{align} n_\alpha~N_{\alpha\beta} & \quad \mathrm{or} \quad u^0_\beta \\ n_\alpha~M_{\alpha\beta} & \quad \mathrm{or} \quad \varphi_\alpha \\ n_\alpha~Q_\alpha & \quad \mathrm{or} \quad w^0 \end{align}$

Stress-strain relations

The stress-strain relations for a linear elastic Mindlin-Reissner plate are given by

$\begin{align} \sigma_{\alpha\beta} & = C_{\alpha\beta\gamma\theta}~\varepsilon_{\gamma\theta} \\ \sigma_{\alpha 3} & = C_{\alpha 3\gamma\theta}~\varepsilon_{\gamma\theta} \\ \sigma_{33} & = C_{33\gamma\theta}~\varepsilon_{\gamma\theta} \end{align}$

Since $\sigma_{33}$ does not appear in the equilibrium equations, it is implicitly assumed that it does not have any effect on the momentum balance, and it is neglected. This assumption is also called the plane stress assumption.
The remaining stress-strain relations for an orthotropic material, in matrix form, can be written as $\begin{bmatrix}\sigma_{11} \\ \sigma_{22} \\ \sigma_{23} \\ \sigma_{31} \\ \sigma_{12} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{22} & 0 & 0 & 0 \\ 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & C_{55} & 0 \\ 0 & 0 & 0 & 0 & C_{66}\end{bmatrix} \begin{bmatrix}\varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{23} \\ \varepsilon_{31} \\\varepsilon_{12}\end{bmatrix}$ Then, $\begin{bmatrix}N_{11} \\ N_{22} \\ N_{12} \end{bmatrix} = \int_{-h}^h \begin{bmatrix} C_{11} & C_{12} & 0 \\ C_{12} & C_{22} & 0 \\ 0 & 0 & C_{66} \end{bmatrix} \begin{bmatrix}\varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{12} \end{bmatrix} dx_3 = \left\{ \int_{-h}^h \begin{bmatrix} C_{11} & C_{12} & 0 \\ C_{12} & C_{22} & 0 \\ 0 & 0 & C_{66} \end{bmatrix}~dx_3 \right\} \begin{bmatrix} u^0_{1,1} \\ u^0_{2,2} \\ \frac{1}{2}~(u^0_{1,2}+u^0_{2,1}) \end{bmatrix}$ and $\begin{bmatrix}M_{11} \\ M_{22} \\ M_{12} \end{bmatrix} = \int_{-h}^h x_3~\begin{bmatrix} C_{11} & C_{12} & 0 \\ C_{12} & C_{22} & 0 \\ 0 & 0 & C_{66} \end{bmatrix} \begin{bmatrix}\varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{12} \end{bmatrix} dx_3 = -\left\{ \int_{-h}^h x_3^2~\begin{bmatrix} C_{11} & C_{12} & 0 \\ C_{12} & C_{22} & 0 \\ 0 & 0 & C_{66} \end{bmatrix}~dx_3 \right\} \begin{bmatrix} \varphi_{1,1} \\ \varphi_{2,2} \\ \frac{1}{2}(\varphi_{1,2}+\varphi_{2,1}) \end{bmatrix}$ For the shear terms $\begin{bmatrix}Q_1 \\ Q_2 \end{bmatrix} = \kappa~\int_{-h}^h \begin{bmatrix} C_{55} & 0 \\ 0 & C_{44} \end{bmatrix} \begin{bmatrix}\varepsilon_{31} \\ \varepsilon_{32} \end{bmatrix} dx_3 = \cfrac{\kappa}{2}\left\{ \int_{-h}^h \begin{bmatrix} C_{55} & 0 \\ 0 & C_{44} \end{bmatrix}~dx_3 \right\} \begin{bmatrix} w^0_{,1} - \varphi_1 \\ w^0_{,2} - \varphi_2 \end{bmatrix}$

The extensional stiffnesses are the quantities $A_{\alpha\beta} := \int_{-h}^h C_{\alpha\beta}~dx_3$

The bending stiffnesses are the quantities $D_{\alpha\beta} := \int_{-h}^h x_3^2~C_{\alpha\beta}~dx_3 \,.$

Mindlin theory for isotropic plates

For uniformly thick, homogeneous, and isotropic plates, the stress-strain relations in the plane of the plate are $\begin{bmatrix}\sigma_{11} \\ \sigma_{22} \\ \sigma_{12} \end{bmatrix} = \cfrac{E}{1-\nu^2} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & 1-\nu \end{bmatrix} \begin{bmatrix}\varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{12} \end{bmatrix} \,.$ where $E$ is the Young's modulus, $\nu$ is the Poisson's ratio, and $\varepsilon_{\alpha\beta}$ are the in-plane strains. The through-the-thickness shear stresses and strains are related by $\sigma_{31} = 2G\varepsilon_{31} \quad \text{and} \quad \sigma_{32} = 2G\varepsilon_{32}$ where $G = E/(2(1+\nu))$ is the shear modulus.
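As a quick numerical check of the stiffness definitions above, here is a minimal sketch (the material constants are hypothetical, chosen only for illustration) that evaluates the through-thickness integrals for a homogeneous isotropic plate; written in terms of the full thickness $t = 2h$, the $D_{11}$ entry reproduces the bending rigidity $Et^3/(12(1-\nu^2))$ quoted in the next section.

```python
import numpy as np

# Hypothetical values for illustration only: a steel-like plate.
E, nu, h = 210e9, 0.3, 0.005   # Young's modulus (Pa), Poisson ratio, HALF-thickness (m)

# In-plane stiffness matrix from the isotropic stress-strain relation above.
C = (E / (1 - nu**2)) * np.array([[1.0, nu,  0.0],
                                  [nu,  1.0, 0.0],
                                  [0.0, 0.0, 1.0 - nu]])

# Extensional and bending stiffnesses: A = int C dx3, D = int x3^2 C dx3 over [-h, h].
A    = 2 * h * C            # the integrand is constant in x3
Dmat = (2 * h**3 / 3) * C   # int x3^2 dx3 = 2h^3/3

t = 2 * h                   # full plate thickness
print(np.isclose(Dmat[0, 0], E * t**3 / (12 * (1 - nu**2))))  # True
```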
Constitutive relations

The relations between the stress resultants and the generalized deformations are, $\begin{bmatrix}N_{11} \\ N_{22} \\ N_{12} \end{bmatrix} = \cfrac{2Eh}{1-\nu^2} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & 1-\nu \end{bmatrix} \begin{bmatrix} u^0_{1,1} \\ u^0_{2,2} \\ \frac{1}{2}~(u^0_{1,2}+u^0_{2,1}) \end{bmatrix} \,,$ $\begin{bmatrix}M_{11} \\ M_{22} \\ M_{12} \end{bmatrix} = -\cfrac{2Eh^3}{3(1-\nu^2)} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & 1-\nu \end{bmatrix} \begin{bmatrix} \varphi_{1,1} \\ \varphi_{2,2} \\ \frac{1}{2}(\varphi_{1,2}+\varphi_{2,1}) \end{bmatrix} \,,$ and $\begin{bmatrix}Q_1 \\ Q_2 \end{bmatrix} = \kappa G h \begin{bmatrix} w^0_{,1} - \varphi_1 \\ w^0_{,2} - \varphi_2 \end{bmatrix} \,.$ The bending rigidity is defined as the quantity $D = \cfrac{2Eh^3}{3(1-\nu^2)} \,.$ For a plate of thickness $h$, the bending rigidity has the form $D = \cfrac{Eh^3}{12(1-\nu^2)} \,.$

Governing equations

If we ignore the in-plane extension of the plate, the governing equations are $\begin{align} M_{\alpha\beta,\beta}-Q_\alpha & = 0 \\ Q_{\alpha,\alpha}+q & = 0 \,. \end{align}$ In terms of the generalized deformations, these equations can be written as $\begin{align} &\nabla^2 \left(\frac{\partial \varphi_1}{\partial x_1} + \frac{\partial \varphi_2}{\partial x_2}\right) = -\frac{q}{D} \\ &\nabla^2 w^0 - \frac{\partial \varphi_1}{\partial x_1} - \frac{\partial \varphi_2}{\partial x_2} = -\frac{q}{\kappa G h} \\ &\nabla^2 \left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) = -\frac{2\kappa G h}{D(1-\nu)}\left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) \,. \end{align}$

Derivation of equilibrium equations in terms of deformations

If we expand out the governing equations of a Mindlin plate, we have $\begin{align} \frac{\partial M_{11}}{\partial x_1} + \frac{\partial M_{12}}{\partial x_2} & = Q_1 \quad\,,\quad \frac{\partial M_{21}}{\partial x_1} + \frac{\partial M_{22}}{\partial x_2} = Q_2 \\ \frac{\partial Q_1}{\partial x_1} + \frac{\partial Q_2}{\partial x_2} & = - q \,. \end{align}$ Recalling that $M_{11} = -D\left(\frac{\partial \varphi_1}{\partial x_1}+\nu\frac{\partial \varphi_2}{\partial x_2}\right) ~,~~ M_{22} = -D\left(\frac{\partial \varphi_2}{\partial x_2}+\nu\frac{\partial \varphi_1}{\partial x_1}\right) ~,~~ M_{12} = -\frac{D(1-\nu)}{2}\left(\frac{\partial \varphi_1}{\partial x_2}+\frac{\partial \varphi_2}{\partial x_1}\right)$ and combining the three governing equations, we have $\frac{\partial^3\varphi_1}{\partial x_1^3}+\frac{\partial^3 \varphi_1}{\partial x_1\partial x_2^2} + \frac{\partial^3 \varphi_2}{\partial x_1^2 \partial x_2}+ \frac{\partial^3 \varphi_2}{\partial x_2^3}= -\frac{q}{D} \,.$ If we define $\mathcal{M} := D \left(\frac{\partial \varphi_1}{\partial x_1} + \frac{\partial \varphi_2}{\partial x_2}\right)$ we can write the above equation as $\nabla^2 \mathcal{M} = -q \,.$ Similarly, using the relationships between the shear force resultants and the deformations, and the equation for the balance of shear force resultants, we can show that $\kappa G h \left(\nabla^2 w^0 - \frac{\mathcal{M}}{D}\right) = -q \,.$ Since there are three unknowns in the problem, $\varphi_1$, $\varphi_2$, and $w^0$, we need a third equation which can be found by differentiating the expressions for the shear force resultants and the governing equations in terms of the moment resultants, and equating these.
The resulting equation has the form $\nabla^2 \left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) = -\frac{2\kappa G h}{D(1-\nu)}\left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) \,.$ Therefore, the three governing equations in terms of the deformations are $\begin{align} &\nabla^2 \left(\frac{\partial \varphi_1}{\partial x_1} + \frac{\partial \varphi_2}{\partial x_2}\right) = -\frac{q}{D} \\ &\nabla^2 w^0 - \frac{\partial \varphi_1}{\partial x_1} - \frac{\partial \varphi_2}{\partial x_2} = -\frac{q}{\kappa G h} \\ &\nabla^2 \left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) = -\frac{2\kappa G h}{D(1-\nu)}\left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) \,. \end{align}$ The boundary conditions along the edges of a rectangular plate are $\begin{align} \text{simply supported} \quad & \quad w^0 = 0, M_{11} = 0 ~(\text{or}~M_{22} = 0), \varphi_1 = 0 ~(\text{or}~\varphi_2 = 0) \\ \text{clamped} \quad & \quad w^0 = 0, \varphi_1 = 0, \varphi_{2} = 0 \,. \end{align}$

Relationship to Reissner theory

The canonical constitutive relations for shear deformation theories of isotropic plates can be expressed as [5][6] $\begin{align} M_{11} & = D\left[\mathcal{A}\left(\frac{\partial \varphi_1}{\partial x_1}+\nu\frac{\partial \varphi_2}{\partial x_2}\right) - (1-\mathcal{A})\left(\frac{\partial^2 w^0}{\partial x_1^2} + \nu\frac{\partial^2 w^0}{\partial x_2^2}\right)\right] + \frac{q}{1-\nu}\,\mathcal{B}\\ M_{22} & = D\left[\mathcal{A}\left(\frac{\partial \varphi_2}{\partial x_2}+\nu\frac{\partial \varphi_1}{\partial x_1}\right) - (1-\mathcal{A})\left(\frac{\partial^2 w^0}{\partial x_2^2} + \nu\frac{\partial^2 w^0}{\partial x_1^2}\right)\right] + \frac{q}{1-\nu}\,\mathcal{B}\\ M_{12} & = \frac{D(1-\nu)}{2}\left[\mathcal{A}\left(\frac{\partial \varphi_1}{\partial x_2}+\frac{\partial \varphi_2}{\partial x_1}\right) - 2(1-\mathcal{A})\,\frac{\partial^2 w^0}{\partial x_1 \partial x_2}\right] \\ Q_1 & = \mathcal{A} \kappa G h\left(\varphi_1 + \frac{\partial w^0}{\partial x_1}\right) \\ Q_2 & = \mathcal{A} \kappa G h\left(\varphi_2 + \frac{\partial w^0}{\partial x_2}\right) \,. \end{align}$ Note that the plate thickness is $h$ (and not $2h$) in the above equations and $D = Eh^3/[12(1-\nu^2)]$. If we define a Marcus moment, $\mathcal{M} = D\left[\mathcal{A}\left(\frac{\partial \varphi_1}{\partial x_1} + \frac{\partial \varphi_2}{\partial x_2}\right) - (1-\mathcal{A})\nabla^2 w^0\right] + \frac{2q}{1-\nu^2}\mathcal{B}$ we can express the shear resultants as $\begin{align} Q_1 & = \frac{\partial \mathcal{M}}{\partial x_1} + \frac{D(1-\nu)}{2}\left[\mathcal{A}\frac{\partial }{\partial x_2}\left(\frac{\partial \varphi_1}{\partial x_2} -\frac{\partial \varphi_2}{\partial x_1}\right)\right] - \frac{\mathcal{B}}{1+\nu}\frac{\partial q}{\partial x_1} \\ Q_2 & = \frac{\partial \mathcal{M}}{\partial x_2} - \frac{D(1-\nu)}{2}\left[\mathcal{A}\frac{\partial }{\partial x_1}\left(\frac{\partial \varphi_1}{\partial x_2} -\frac{\partial \varphi_2}{\partial x_1}\right)\right] - \frac{\mathcal{B}}{1+\nu}\frac{\partial q}{\partial x_2}\,. \end{align}$ These relations and the governing equations of equilibrium, when combined, lead to the following canonical equilibrium equations in terms of the generalized displacements.
$\begin{align} & \nabla^2 \left(\mathcal{M} - \frac{\mathcal{B}}{1+\nu}\,q\right) = -q \\ & \kappa G h\left(\nabla^2 w^0 + \frac{\mathcal{M}}{D}\right) = -\left(1 - \cfrac{\mathcal{B} c^2}{1+\nu}\right)q \\ & \nabla^2 \left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) = c^2\left(\frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1}\right) \end{align}$ where $c^2 = \frac{2\kappa G h}{D(1-\nu)} \,.$ In Mindlin's theory, $w^0$ is the transverse displacement of the mid-surface of the plate and the quantities $\varphi_1$ and $\varphi_2$ are the rotations of the mid-surface normal about the $x_2$ and $x_1$-axes, respectively. The canonical parameters for this theory are $\mathcal{A} = 1$ and $\mathcal{B} = 0$. The shear correction factor $\kappa$ usually has the value $5/6$. On the other hand, in Reissner's theory, $w^0$ is the weighted average transverse deflection while $\varphi_1$ and $\varphi_2$ are equivalent rotations which are not identical to those in Mindlin's theory. The canonical parameters for Reissner's theory are $\mathcal{A} = 1$, $\mathcal{B} = h^2\nu/10$, and $\kappa = 5/6$.

Relationship to Kirchhoff-Love theory

If we define the moment sum for Kirchhoff-Love theory as $\mathcal{M}^K := -D\nabla^2 w^K$ we can show that [5] $\mathcal{M} = \mathcal{M}^K + \frac{\mathcal{B}}{1+\nu}\,q + D \nabla^2 \Phi$ where $\Phi$ is a biharmonic function such that $\nabla^2 \nabla^2 \Phi = 0$. We can also show that, if $w^K$ is the displacement predicted for a Kirchhoff-Love plate, $w^0 = w^K + \frac{\mathcal{M}^K}{\kappa G h}\left(1 - \frac{\mathcal{B} c^2}{2}\right) - \Phi + \Psi$ where $\Psi$ is a function that satisfies the Laplace equation, $\nabla^2 \Psi = 0$. The rotations of the normal are related to the displacements of a Kirchhoff-Love plate by $\begin{align} \varphi_1 = - \frac{\partial w^K}{\partial x_1} - \frac{1}{\kappa G h}\left(1 - \frac{1}{\mathcal{A}} - \frac{\mathcal{B} c^2}{2}\right)Q_1^K + \frac{\partial }{\partial x_1}\left(\frac{D}{\kappa G h \mathcal{A}}\nabla^2 \Phi + \Phi - \Psi\right) + \frac{1}{c^2}\frac{\partial \Omega}{\partial x_2} \\ \varphi_2 = - \frac{\partial w^K}{\partial x_2} - \frac{1}{\kappa G h}\left(1 - \frac{1}{\mathcal{A}} - \frac{\mathcal{B} c^2}{2}\right)Q_2^K + \frac{\partial }{\partial x_2}\left(\frac{D}{\kappa G h \mathcal{A}}\nabla^2 \Phi + \Phi - \Psi\right) + \frac{1}{c^2}\frac{\partial \Omega}{\partial x_1} \end{align}$ where $Q_1^K = -D\frac{\partial }{\partial x_1}\left(\nabla^2 w^K\right) ~,~~ Q_2^K = -D\frac{\partial }{\partial x_2}\left(\nabla^2 w^K\right) ~,~~ \Omega := \frac{\partial \varphi_1}{\partial x_2} - \frac{\partial \varphi_2}{\partial x_1} \,.$

References
1. R. D. Mindlin, 1951, Influence of rotatory inertia and shear on flexural motions of isotropic, elastic plates, ASME Journal of Applied Mechanics, Vol. 18, pp. 31-38.
2. E. Reissner, 1945, The effect of transverse shear deformation on the bending of elastic plates, ASME Journal of Applied Mechanics, Vol. 12, pp. A68-77.
3. Wang, C. M., Lim, G. T., Reddy, J. N., Lee, K. H., 2001, Relationships between bending solutions of Reissner and Mindlin plate theories, Engineering Structures, Vol. 23, pp. 838-849.
4. Reddy, J. N., 1999, Theory and analysis of elastic plates, Taylor and Francis, Philadelphia.
5. Lim, G. T. and Reddy, J. N., 2003, On canonical bending relationships for plates, International Journal of Solids and Structures, Vol. 40, pp. 3039-3067.
6.
These equations use a slightly different sign convention than the preceding discussion.
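To make the Kirchhoff-Love relationship concrete, here is a sketch that evaluates the Navier series for a simply supported square Mindlin plate under uniform load and applies the correction $w^0 = w^K + \mathcal{M}^K/(\kappa G h)$ from above with the Mindlin canonical parameters $\mathcal{A}=1$, $\mathcal{B}=0$, $\kappa = 5/6$. It assumes $\Phi = \Psi = 0$, which holds for simply supported polygonal plates (see reference 3); all numerical values are illustrative.

```python
import numpy as np

E, nu, kappa = 210e9, 0.3, 5/6    # illustrative material values
a, t, q0 = 1.0, 0.1, 1e4          # side length (m), full thickness (m), load (Pa)
D = E * t**3 / (12 * (1 - nu**2))
G = E / (2 * (1 + nu))

wK = wM = 0.0                     # centre deflections: Kirchhoff and Mindlin
x = y = a / 2
for m in range(1, 100, 2):        # only odd terms survive for a uniform load
    for n in range(1, 100, 2):
        qmn = 16 * q0 / (np.pi**2 * m * n)
        lam = np.pi**2 * ((m / a)**2 + (n / a)**2)  # -Laplacian eigenvalue of the mode
        Wmn = qmn / (D * lam**2)                    # Kirchhoff modal amplitude
        s = np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / a)
        wK += Wmn * s
        # M^K = -D lap(w^K) contributes D*lam*Wmn per mode, so w^0 picks up
        # the shear-correction factor (1 + D*lam/(kappa*G*t)).
        wM += Wmn * (1 + D * lam / (kappa * G * t)) * s

print(wK, wM)  # the Mindlin deflection exceeds Kirchhoff's, more so for thick plates
```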
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 100, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8816781044006348, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/979?sort=votes
## Algorithm for finding the volume of a convex polytope

It's easy to find the area of a convex polygon by division into triangles, but what is the optimal way of finding the volume of higher-dimensional convex bodies? I tried a few methods for dividing them into simplices, but gave it up and went with a Monte Carlo estimation scheme instead. Bonus question: How to find the surface area of those same convex bodies? EDIT: To answer David's question: the data set is a Voronoi tessellation of an n-dimensional volume (n usually 4) with a periodic boundary (like a torus). So I have the coordinates of the vertices of the convex bodies as well as the connectivity of all the facets, faces, etc. For the Monte Carlo I mentioned, I did convert everything to half-spaces, so I think that was not very difficult. -

## 8 Answers

I think this problem is hard--the known algorithms are both slow and nontrivial to implement. See Exact Volume Computation for Polytopes for a survey. An interesting feature is that there are various algorithms which are well suited for different kinds of polytopes. As a practical answer, Qhull can compute volumes and surface areas. -

As a follow-up to Barton's response: for hardness results, see I. Bárány & Z. Füredi, Computing the volume is difficult, Discrete and Computational Geometry, 1987. But there are polynomial time approximation schemes for volume of convex bodies independent of dimension, based on random walks within the body: see e.g. M. Dyer, A. Frieze, & R. Kannan, A random polynomial-time algorithm for approximating the volume of convex bodies, J. ACM 1991 and R. Kannan, L. Lovász, & M. Simonovits, Random walks and an O*(n^5) volume algorithm for convex bodies, Random structures and algorithms, 1997. Before getting to these answers, though, it's important to ask: how is your input represented? Is it a convex hull of a set of points, and all you know is the points? Is it an intersection of halfspaces? Is it given to you as an entire face lattice? Also, what range of dimensions do you care about? The upper and lower bounds above are quite general but the dimension and input representation may still make a difference to the answer. -

Matthias Beck and Dennis Pixton used almost 17 GigaHertz years to compute the volume of the 10-dimensional Birkhoff polytope. Read about it, and find the answer, here. If you can get the same result quicker, I'm sure they'd be delighted to know how you did it. -

Here's a fairly straightforward solution for polyhedra (3 dimensions), with running time O(v+ve), where v is the number of vertices and e is the number of edges. I suppose it could be extended to higher dimensions, but it would probably have much worse running time (I fear roughly exponential, as in $O(v^n)$, where n is the number of dimensions). Let our polyhedron have n vertices, defined by their x,y,z coordinates: $v_1, v_2, \ldots, v_n$, and let the lowest point, $v_1$, be the origin (modify the values for the others accordingly), and let it have e edges, defined by the vertices which they connect.
Then, since we have coordinates for the vertices (as that is how we defined them), there must be a "ground level" plane $p_0$ running through the x and z axes (the y-axis being height, and the ground never having an elevation). Then, let $v_2$ be the point closest to the ground plane (shortest line perpendicular to the plane), and let $v_3$ be the next closest, etc., through $v_n$. Through each of the points $v_2$ through $v_n$, draw a plane parallel to the ground, and let them be numbered $p_m$, where m is the subscript of the vertex through which it was drawn. Then, the volume of our polyhedron is equal to the sum of the volumes of the figures between the planes. We should have something resembling this: (the original post's figure, a polyhedron sliced into layers by the parallel planes, is not reproduced here). Let the heights between the segments be $h_1$ through $h_{n-1}$, where height $h_j$ is the height between planes $p_j$ and $p_{j+1}$. Now, through each plane, we have a polygon (or more, if the figure is concave), whose vertices' coordinates can be calculated easily as follows: Let the edge that runs through the plane $p_j$ have endpoints $v_a$ and $v_b$. Then, the displacement vector is $v_b - v_a$ (assuming the coordinates of v are in vector form), and the percentage travelled up is $\frac{h_j-h_a}{h_{b-1}-h_a}$. Multiply this by $v_b - v_a$ and add to $v_a$ to calculate the new point of intersection for that edge: Intersection point = $(v_b-v_a)\frac{h_j-h_a}{h_{b-1}-h_a}+v_a$ The area of these polygons can be determined using triangles, or a simplification of this very process in just 2 dimensions. PlanetMath says that the volume of a prismatoid (which is the type of figure contained between sequential planes) is $h\frac{B_1 + B_2 + 4M}{6}$, where the Bs are the areas of the parallel polygons and M is the area of the midway polygon, which is exactly halfway between them (and parallel to them). Since we already know the area of each of the end polygons, and we can easily calculate the vertices of the midway polygon (using the previous paragraph's method), we can calculate the volume of the resulting prismatoids. Adding them up yields the total volume of the polyhedron. I suppose that the only real issue in this case, then, is, via code, determining which edges run through any particular plane, but if we were to actually look at it, we could tell very easily. A simpler version of this can be used to figure out the area of any polygon; simply draw lines through the vertices parallel to the x-axis and calculate the area of the resulting trapezoids as $(b_1+b_2)h/2$, with $h$ the width of each strip. -

There is a randomized algorithm with O(n^4) running time based on simulated annealing, using the hit-and-run convex body sampling algorithm along with isotropy approximation, by Vempala and Lovász: here. The running time is the number of oracle calls for membership in the convex body and has an implicit polylogarithmic term. -

There is a result by J. Borwein et al. on volumes of convex polytopes constructed by intersecting symmetric hyperplanes. These volumes are obtained from multivariate integrals involving the function sin(x)/x. These integrals have very nice closed forms. Here are two references on this work: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.8186 and http://algo.inria.fr/seminars/sem01-02/borwein1.pdf -

Since you're given so much information about the faces of the polytope, I think that the division into simplices method ought to be straightforward (if tedious) to carry out. The idea is to divide the polytope into orthoschemes, with signed volumes, as described below.
This has the advantage that the formula for the volume of an orthoscheme is very simple, and computing the coordinates of the orthoschemes is simple linear algebra. The disadvantage is that there will be one orthoscheme for each flag of the polytope, so the number of them could be quite large. Computing the surface area of the boundary could be done in tandem, since one would just take the area of each orthoscheme's intersection with the supporting hyperplanes. I suspect this method won't work for you though, if the polytope is complicated (and maybe this is the approach you already tried). To compute the vertices of the orthoschemes, first choose a point, then take its orthogonal projection to each supporting hyperplane of the polytope. Inductively do this for each facet. Then, for each flag, take the corresponding points in each face of the flag, and form an orthoscheme. The sign of the volume of the orthoscheme will be determined in each dimension by whether the vertex lies inside or outside the corresponding hyperplane, times the sign of the lower-dimensional orthoscheme it is a cone over. Some of the orthoschemes will lie partly outside of the polytope, but the volumes outside will cancel with this sign convention. -

As documented above, the paper by Bueler, Enge and Fukuda is a good place to start and gives many techniques for computing volumes of polytopes. In my experience using lrs to build the triangulation is a very good method. If your polytope has symmetries, then it is possible to use them to reduce the size of the computation. See the linked page for the relevant programs. -
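As a practical footnote to the Qhull suggestion in the first answer: SciPy exposes Qhull directly, so both the volume and the surface area asked about can be computed in a few lines. A sketch (the random 4-D point cloud is only an illustration, matching the dimension mentioned in the question's edit):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
points = rng.standard_normal((200, 4))   # a random 4-D point cloud

hull = ConvexHull(points)                # SciPy calls into Qhull
print(hull.volume)                       # d-dimensional volume of the hull
print(hull.area)                         # (d-1)-dimensional surface measure
```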
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9330219626426697, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/90149-symmetric-matrices.html
# Thread: 1. ## symmetric matrices Let Sym(n) be the vector space of symmetric n×n matrices, and LT(n) the vector space of lower-triangular n×n matrices. Define F : LT(n) → Sym(n) by F(A) = $A^TA$, where $A^T$ is the transpose of A. Show that there exists an open set U about the identity matrix in LT(n) and an open set V about the identity matrix in Sym(n), such that for each symmetric matrix B ∈ V there is a unique lower triangular matrix A ∈ U such that F(A) = B. Could anyone help me to solve this problem?

2. Originally Posted by jin_nzzang Let Sym(n) be the vector space of symmetric n×n matrices, and LT(n) the vector space of lower-triangular n×n matrices. Define F : LT(n) → Sym(n) by F(A) = $A^TA$, where $A^T$ is the transpose of A. Show that there exists an open set U about the identity matrix in LT(n) and an open set V about the identity matrix in Sym(n), such that for each symmetric matrix B ∈ V there is a unique lower triangular matrix A ∈ U such that F(A) = B. Could anyone help me to solve this problem? Let U be the set of matrices in LT(n) with strictly positive elements on the diagonal, and let V be the set of positive definite matrices in Sym(n). Then U and V are open sets containing the identity. The map $F(A) = A^{\,\textsc{t}}A$ clearly takes U into V. Its inverse is the Cholesky decomposition, and there is a theorem (proved here) which states that this takes V to U.
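A numerical illustration of the answer (a sketch; the flip trick below is one way to obtain the $A^TA$-ordered factor from NumPy's standard $LL^T$ Cholesky routine, and the test matrix is a hypothetical perturbation of the identity):

```python
import numpy as np

def lower_factor(B):
    """Return the lower-triangular A with positive diagonal such that A.T @ A = B.

    NumPy's cholesky gives B = L @ L.T, so we conjugate by the exchange
    matrix J (which reverses the index order) and transpose to swap the
    factor order.
    """
    n = B.shape[0]
    J = np.fliplr(np.eye(n))
    L = np.linalg.cholesky(J @ B @ J)   # J B J = L L^T with L lower triangular
    return (J @ L @ J).T                # J L J is upper triangular; its transpose is lower

# Sanity check near the identity, as in the thread (A has positive diagonal).
rng = np.random.default_rng(0)
n = 4
A = np.eye(n) + 0.05 * np.tril(rng.standard_normal((n, n)))
B = A.T @ A                              # F(A) = A^T A is symmetric positive definite
print(np.allclose(lower_factor(B), A))   # True: the factor is recovered uniquely
```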
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8849302530288696, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/47663/hypergraph-chromatic-number-vs-degree-clique-size/52994
## Hypergraph Chromatic Number vs Degree, Clique-Size

For a hypergraph let $\chi$ be the least number of colours needed to colour the vertices, so that in each edge, each colour is used at most once (i.e., the strong chromatic number). Let $\Delta$ be the maximum number of hyperedges containing any vertex. Let $\omega$ be the maximum size of a clique, meaning a vertex set such that for every pair of vertices in the clique, some edge contains both. Question: is there $\epsilon>0$ so that $\chi \le \Delta \omega / (1+\epsilon)$ in all hypergraphs? Motive: let $R$ be the maximum edge size. A simple greedy algorithm for colouring can be used to establish that $\chi \le 1 + \Delta(R-1)$, and this bound cannot be improved in general. Note $\chi \ge \omega \ge R$; so I am essentially asking if $\omega$ approximates $\chi$ more closely than $R$. -

## 1 Answer

Very nice question, Dave. I had a look at it with Matej Stehlik, here in Grenoble. We found a way to give a positive answer to your question, although it uses more heavy machinery than we would like. You would expect there is an easy argument, but if it exists, we haven't found it yet. Given a hypergraph $H$, form a graph $G$ by replacing all hyperedges by cliques. Then $\chi(G)=\chi$, $\omega(G)=\omega$ and $\Delta(G)\le\Delta(R-1)$. Bruce Reed, in his paper "$\omega$, $\Delta$ and $\chi$", J. Graph Theory 27 (1998) 177-212, proves that if $a=1/140000000$, then for $\Delta(G)$ large enough, we have $\chi(G)\le a\omega(G)+(1-a)(\Delta(G)+1)$. So for the hypergraph we have $$\chi\le a\omega+(1-a)(\Delta(R-1)+1)\le a\omega+(1-a)\Delta R\le a\omega+(1-a)\Delta\omega=(1-a+a/\Delta)\cdot\Delta\omega.$$ So if $\Delta\ge2$ (and $\Delta(R-1)$ large enough to allow Bruce's theorem), we get $\chi\le(1-a/2)\Delta\omega$. I would be interested to hear about better bounds or more elementary approaches (Bruce's proof is not for the faint of heart). -

That's great! That totally answers the original question as far as I can see, assuming $\Delta, R \ge 2$ (and I notice now my conjecture was false for $\Delta = 1$), since for bounded $\Delta(R-1)$ we must have that $\Delta$ and $R$ are bounded, in which case $\Delta(R-1)+1$ is indeed at most $\Delta\omega/(1+\epsilon)$. – Dave Pritchard Jan 24 2011 at 19:10 Just for the record, Bruce and I now have an easier proof of this theorem posted on arXiv, but it still uses a combination of structural and probabilistic arguments. – Andrew D. King Jan 15 at 13:09
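For readers who want to experiment with the clique-expansion step used in the answer, here is a small sketch (pure Python; the hypergraph is a hypothetical example) that builds $G$ from $H$, colours it greedily, and compares the result against the bound $\chi \le 1 + \Delta(R-1)$ quoted in the question.

```python
from itertools import combinations

# A hypothetical example hypergraph on vertices 0..5.
edges = [{0, 1, 2}, {2, 3, 4}, {1, 4, 5}, {0, 3, 5}]

# Clique expansion: u ~ v iff some hyperedge contains both.
adj = {v: set() for e in edges for v in e}
for e in edges:
    for u, v in combinations(e, 2):
        adj[u].add(v)
        adj[v].add(u)

# Greedy colouring of the expanded graph = strong colouring of the hypergraph.
colour = {}
for v in sorted(adj):
    used = {colour[u] for u in adj[v] if u in colour}
    colour[v] = next(c for c in range(len(adj)) if c not in used)

chi_greedy = max(colour.values()) + 1
Delta = max(sum(v in e for e in edges) for v in adj)   # max number of edges at a vertex
R = max(len(e) for e in edges)                          # max edge size
print(chi_greedy, 1 + Delta * (R - 1))                  # here: 3 <= 5
```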
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385630488395691, "perplexity_flag": "head"}
http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Matrix_(mathematics)
# Matrix (mathematics)

For the square matrix section, see square matrix. In mathematics, a matrix (plural matrices) is a rectangular table of numbers or, more generally, of elements of a ring-like algebraic structure. In this article, the entries of a matrix are real or complex numbers unless otherwise noted. Matrices are useful to record data that depend on two categories, and to keep track of the coefficients of systems of linear equations and linear transformations. For the development and applications of matrices, see matrix theory.

## Definitions and notations

The horizontal lines in a matrix are called rows and the vertical lines are called columns. A matrix with m rows and n columns is called an m-by-n matrix (or m×n matrix) and m and n are called its dimensions. The entry of a matrix A that lies in the i-th row and the j-th column is called the i,j entry or (i,j)-th entry of A. This is written as A[i,j] or $A_{i,j}$, or in notation of the C programming language, A[i][j]. We often write $A:=(a_{i,j})_{m \times n}$ to define an m × n matrix A, with each entry A[i,j] of the matrix called $a_{i,j}$ for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.

### Example

The matrix $\begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 7 \\ 4&9&2 \\ 6&1&5\end{bmatrix}$ is a 4×3 matrix. The element A[2,3] or $a_{2,3}$ is 7.

## Adding and multiplying matrices

### Sum

If two m-by-n matrices A and B are given, we may define their sum A + B as the m-by-n matrix computed by adding corresponding elements, i.e., (A + B)[i, j] = A[i, j] + B[i, j]. For example $\begin{bmatrix} 1 & 3 & 2 \\ 1 & 0 & 0 \\ 1 & 2 & 2 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 5 \\ 7 & 5 & 0 \\ 2 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1+0 & 3+0 & 2+5 \\ 1+7 & 0+5 & 0+0 \\ 1+2 & 2+1 & 2+1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 7 \\ 8 & 5 & 0 \\ 3 & 3 & 3 \end{bmatrix}$ Another, much less often used notion of matrix addition can be found at Direct sum (Matrix).

### Scalar multiplication

If a matrix A and a number c are given, we may define the scalar multiplication cA by (cA)[i, j] = cA[i, j]. For example $2 \begin{bmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{bmatrix} = \begin{bmatrix} 2\times 1 & 2\times 8 & 2\times -3 \\ 2\times 4 & 2\times -2 & 2\times 5 \end{bmatrix} = \begin{bmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{bmatrix}$ These two operations turn the set M(m, n, R) of all m-by-n matrices with real entries into a real vector space of dimension mn.

### Multiplication

Main article: Matrix multiplication Multiplication of two matrices is well-defined only if the number of columns of the first matrix is the same as the number of rows of the second matrix. If A is an m-by-n matrix (m rows, n columns) and B is an n-by-p matrix (n rows, p columns), then their product AB is the m-by-p matrix (m rows, p columns) given by (AB)[i, j] = A[i, 1] * B[1, j] + A[i, 2] * B[2, j] + ... + A[i, n] * B[n, j] for each pair i and j.
For instance $\begin{bmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \\ \end{bmatrix} \times \begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} (1 \times 3 + 0 \times 2 + 2 \times 1) & (1 \times 1 + 0 \times 1 + 2 \times 0) \\ (-1 \times 3 + 3 \times 2 + 1 \times 1) & (-1 \times 1 + 3 \times 1 + 1 \times 0) \\ \end{bmatrix} = \begin{bmatrix} 5 & 1 \\ 4 & 2 \\ \end{bmatrix}$ This multiplication has the following properties: • (AB)C = A(BC) for all k-by-m matrices A, m-by-n matrices B and n-by-p matrices C ("associativity"). • (A + B)C = AC + BC for all m-by-n matrices A and B and n-by-k matrices C ("distributivity"). • C(A + B) = CA + CB for all m-by-n matrices A and B and k-by-m matrices C ("distributivity"). It is important to note that commutativity does not generally hold; that is, given matrices A and B and their product defined, then generally AB ≠ BA. Matrices are said to anticommute if AB = -BA. Such matrices are very important in representations of Lie algebras and in representations of Clifford algebras.

## Linear transformations, ranks and transpose

Matrices can conveniently represent linear transformations because matrix multiplication neatly corresponds to the composition of maps, as will be described next. Here and in the sequel we identify R^n with the set of columns, or n-by-1 matrices. For every linear map f : R^n -> R^m there exists a unique m-by-n matrix A such that f(x) = Ax for all x in R^n. We say that the matrix A "represents" the linear map f. Now if the k-by-m matrix B represents another linear map g : R^m -> R^k, then the linear map g ∘ f is represented by BA. This follows from the above-mentioned associativity of matrix multiplication. The rank of a matrix A is the dimension of the image of the linear map represented by A; this is the same as the dimension of the space generated by the rows of A, and also the same as the dimension of the space generated by the columns of A. The transpose of an m-by-n matrix A is the n-by-m matrix A^tr (also sometimes written as A^T or ^tA) gotten by turning rows into columns and columns into rows, i.e. A^tr[i, j] = A[j, i] for all indices i and j. If A describes a linear map with respect to two bases, then the matrix A^tr describes the transpose of the linear map with respect to the dual bases, see dual space. We have (A + B)^tr = A^tr + B^tr and (AB)^tr = B^tr * A^tr.

## Square matrices and related definitions

A square matrix is a matrix which has the same number of rows as columns. The set of all square n-by-n matrices, together with matrix addition and matrix multiplication is a ring. Unless n = 1, this ring is not commutative. M(n, R), the ring of real square matrices, is a real unitary associative algebra. M(n, C), the ring of complex square matrices, is a complex associative algebra. The unit matrix or identity matrix I_n, with elements on the main diagonal set to 1 and all other elements set to 0, satisfies M I_n = M and I_n N = N for any m-by-n matrix M and n-by-k matrix N. For example, if n = 3: $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ The identity matrix is the identity element in the ring of square matrices. Invertible elements in this ring are called invertible matrices or non-singular matrices. An n by n matrix A is invertible if and only if there exists a matrix B such that AB = I_n (= BA). In this case, B is the inverse matrix of A, denoted by A^{-1}. The set of all invertible n-by-n matrices forms a group (specifically a Lie group) under matrix multiplication, the general linear group.
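A quick NumPy sketch reproducing the worked product above and illustrating both the failure of commutativity and the rule (AB)^tr = B^tr * A^tr:

```python
import numpy as np

A = np.array([[1, 0, 2], [-1, 3, 1]])
B = np.array([[3, 1], [2, 1], [1, 0]])

print(A @ B)     # [[5 1], [4 2]], matching the example above

# Commutativity fails in general; here AB and BA do not even have the same shape.
print((A @ B).shape, (B @ A).shape)   # (2, 2) versus (3, 3)

# The transpose reverses the order of a product.
print(np.array_equal((A @ B).T, B.T @ A.T))   # True
```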
If λ is a number and v is a non-zero vector such that Av = λv, then we call v an eigenvector of A and λ the associated eigenvalue. (Eigen means "own" in German.) The number λ is an eigenvalue of A if and only if A − λI_n is not invertible, which happens if and only if p_A(λ) = 0. Here p_A(x) is the characteristic polynomial of A. This is a polynomial of degree n and therefore has n complex roots (counting multiple roots according to their multiplicity). In this sense, every square matrix has n complex eigenvalues. The determinant of a square matrix A is the product of its n eigenvalues, but it can also be defined by the Leibniz formula. Invertible matrices are precisely those matrices with nonzero determinant. The Gauss-Jordan elimination algorithm is of central importance: it can be used to compute determinants, ranks and inverses of matrices and to solve systems of linear equations. The trace of a square matrix is the sum of its diagonal entries, which equals the sum of its n eigenvalues. Every orthogonal matrix is a square matrix.

## Special types of matrices

In many areas in mathematics, matrices with certain structure arise. A few important examples are • Symmetric matrices are such that elements symmetric to the main diagonal (from the upper left to the lower right) are equal, that is, a_{i,j} = a_{j,i}. • Hermitian (or self-adjoint) matrices are such that elements symmetric to the diagonal are each other's complex conjugates, that is, a_{i,j} = a*_{j,i}, where the superscript '*' signifies complex conjugation. • Toeplitz matrices have common elements on their diagonals, that is, a_{i,j} = a_{i+1,j+1}. • Stochastic matrices are square matrices whose columns are probability vectors; they are used to define Markov chains. For a more extensive list see list of matrices.

## Matrices in abstract algebra

If we start with a ring R, we can consider the set M(m,n, R) of all m by n matrices with entries in R. Addition and multiplication of these matrices can be defined as in the case of real or complex matrices (see above). The set M(n, R) of all square n by n matrices over R is a ring in its own right, isomorphic to the endomorphism ring of the left R-module R^n. Similarly, if the entries are taken from a semiring S, matrix addition and multiplication can still be defined as usual. The set of all square n×n matrices over S is itself a semiring. Note that fast matrix multiplication algorithms such as the Strassen algorithm generally only apply to matrices over rings and will not work for matrices over semirings that are not rings. If R is a commutative ring, then M(n, R) is a unitary associative algebra over R. It is then also meaningful to define the determinant of square matrices using the Leibniz formula; a matrix is invertible if and only if its determinant is invertible in R. All statements mentioned in this article for real or complex matrices remain correct for matrices over an arbitrary field. Matrices over a polynomial ring are important in the study of control theory.

## History

The study of matrices is quite old. Latin squares and magic squares have been studied since prehistoric times. Matrices have a long history of application in solving linear equations. Leibniz, one of the two founders of calculus, developed the theory of determinants in 1693. Cramer developed the theory further, presenting Cramer's rule in 1750. Carl Friedrich Gauss and Wilhelm Jordan developed Gauss-Jordan elimination in the 1800s. The term "matrix" was first coined in 1848 by J. J. Sylvester.
Cayley, Hamilton, Grassmann, Frobenius and von Neumann are among the famous mathematicians who have worked on matrix theory. Olga Taussky Todd (1906-1995) started to use matrix theory when investigating an aerodynamic phenomenon called flutter, during WWII.

## Further reading

A more advanced article on matrices is matrix theory.

## External links

• Matrix name and history: very brief overview • WIMS Matrix Calculator computes determinant, rank, inverse etc. online.
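Returning to the eigenvalue facts stated earlier (the determinant is the product of the eigenvalues and the trace is their sum), a brief numerical sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, V = np.linalg.eig(A)
print(np.allclose(A @ V[:, 0], lam[0] * V[:, 0]))    # Av = λv for an eigenpair
print(np.isclose(np.prod(lam), np.linalg.det(A)))    # det(A) = product of eigenvalues
print(np.isclose(np.sum(lam), np.trace(A)))          # tr(A)  = sum of eigenvalues
```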
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8809689879417419, "perplexity_flag": "middle"}
http://nrich.maths.org/7415
# Weekly Challenge 49: Get in Line

##### Stage: 4 and 5 Challenge Level:

When measurements are made in science it often makes sense to plot the resulting pairs of values on a chart and join the points with a smooth curve or a line. This allows us to make predictions of measurements in future experiments or at values other than those that we have measured. Often we can also make a mathematical model in which we suggest that the process will be described by a mathematical equation. Sometimes the equation comes entirely from theory; often an experiment will be needed to work out the numerical values of the constants in these equations. In this problem, $9$ physical situations, their proposed equations and graphs have been mixed up and shown below. Which can you match up? What is the interpretation of the variables $x$ and $y$ in each case for the equations? What units would be needed to label the axes in a reasonably accurate way? Can you identify the physical interpretation of three key points on each of the graphs? Processes 1. A cup of tea is made and the temperature measured in degrees Celsius every second. What would the temperature-time graph look like? 2. The pendulum of a grandfather clock swings to and fro and the angle of the bob from the vertical is measured every 100th of a second. What would the angle-time graphs look like? 3. I throw a tennis ball straight up into the air and catch it. The height of the ball from the ground is measured over the time of the journey using freeze-frame photography. What would the height-time graph look like? 4. I measure several objects using inches and then using metres. What would the inches-metres graph look like? 5. I jump out of a plane and the distance fallen from the plane is measured every $0.1$ second until I open my parachute. What would the distance-time graph look like whilst in free fall? 6. I drive at 70 miles an hour along the motorway and note the reading on my milometer every 5 minutes. What would the plot of milometer v minutes passed graph look like? 7. I bring two magnets of the same polarity together directly in a line in a sequence of steps. Starting from 1 metre, I halve the remaining distance each time and measure the force felt between the magnets. What would the force-distance graph look like? 8. I blow up a roughly spherical balloon using a balloon pump. After each pump I measure the radius of the balloon in centimetres. What would the number-of-pumps vs radius graph look like? 9. I suck water through a straw out of a large beaker at a constant rate and measure the volume of liquid remaining at various times. What would the volume-time graph look like? Line Graphs (the nine candidate graphs are images on the original page and are not reproduced in this copy) Equations In each of these equations, $A$, $B$ and $C$ are constants which you might be able to determine theoretically, or which might need an experiment to determine. In each case $x$ and $y$ are the variables which are being measured. (The nine candidate equations are likewise images and are not reproduced here.)
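As a sketch of how the constants might be worked out from measurements in practice (made-up data; Newton's law of cooling, $y = A + Be^{-Cx}$, is one natural model for the cooling tea in process 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, B, C):
    # Newton's law of cooling: exponential approach to room temperature A.
    return A + B * np.exp(-C * t)

# Made-up "measurements" of tea temperature (deg C) against time (s).
t = np.linspace(0, 600, 25)
y = model(t, 20.0, 70.0, 0.005) + np.random.default_rng(1).normal(0, 0.5, t.size)

params, _ = curve_fit(model, t, y, p0=(25, 60, 0.01))
print(params)   # recovers roughly A = 20, B = 70, C = 0.005
```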
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436150193214417, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/114818/bounding-derivative-of-a-function
## Bounding derivative of a function

Consider $a(t)\in\mathbf{L}^{2}(\mathbb{R})$ with $a(t)>0$, a low-pass smooth function with $\hat{a}(f)=0$ for $|f|>f_{max}$. Can we have an upper bound on the following, $\Big|\frac{a'(t)}{a(t)}\Big|$? Using Bernstein's theorem we can upper bound $|a'(t)|$ alone based on $f_{max}$, but how can we upper bound the ratio mentioned here? Any suggestions are welcome. -

Please edit your question for English and math. What is $a(t)$? is this the same as $x(t)$? What sort of estimate do you want? $x(t)$ can be zero at some points where $x'(t)/x(t)=\infty$. – Alexandre Eremenko Nov 28 at 22:42 Thanks Alexandre, I have corrected the notation and the question. – Neeks Nov 28 at 22:51 You did not tell me what sort of estimate you want. There is no uniform estimate, of course: $a$ can have complex zeros as close as you wish to the real line, condition $a>0$ does not help, and $a'/a$ can be arbitrarily large at some points. – Alexandre Eremenko Nov 29 at 2:07 In the case I am dealing with, only real zeros of $a(t)$ are of interest. But you brought out an interesting possibility. Now taking a simple example, $a(t)=1+\mu\sin\omega t$, with $0<\mu<1$, we have $\Big|\frac{a'(t)}{a(t)}\Big|=\frac{\mu\omega|\cos\omega t|}{1+\mu\sin\omega t}\leq\frac{\mu\omega}{\sqrt{1-\mu^2}}, \forall t$. So above it can be bounded, though a simple example. Can we have a bound for a sum of harmonic sinusoids and generalize it? But polynomials are also entire functions and hence band-limited, but the bandwidth is too large for it. Please correct me. – Neeks Nov 29 at 6:24

## 1 Answer

$a(t)=\cos(t+i\epsilon)\cos(t-i\epsilon)$ is a low pass signal and $a^\prime/a$ can be as large as you wish at the point $t=\pi/2,$ if $\epsilon$ is sufficiently small. -

Please see my above comment. – Neeks Nov 29 at 6:26 1. What "real zeros" are you talking about if one of your conditions is that $a(t)>0$ ? 2. Example which I gave shows that in general there is NO upper bound for $a'/a$ under your conditions. What else are you asking? – Alexandre Eremenko Nov 29 at 23:38 Sorry for the delayed reply. I had missed the comment. Under what further conditions on $a(t)$ can we have an upper bound on the quantity mentioned? For example, in the example I gave we have a bound dependent on $\omega$. – Neeks Dec 5 at 17:23
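The bound quoted in the comments is easy to confirm numerically; a sketch with illustrative values of $\mu$ and $\omega$:

```python
import numpy as np

mu, omega = 0.6, 2.0
t = np.linspace(0, 2 * np.pi / omega, 200001)    # one full period
a = 1 + mu * np.sin(omega * t)
ratio = np.abs(mu * omega * np.cos(omega * t) / a)

print(ratio.max())                        # numerical supremum of |a'/a|
print(mu * omega / np.sqrt(1 - mu**2))    # the claimed bound; the two agree
```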
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440813064575195, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=61b80a8a8aba605dc93107c660f7f71a&p=4211370
## Expressing sol. of Poisson eqn. in terms of vol. and sur. integrals.

Hi, Referring to Jackson's Electrodynamics 3ed, page 197, line 5. He assumes that the magnetization can be divided into volume part and surface part, thus generating eqn 5.100. This is fine. In a straightforward way, I wanted to do the same but for electrostatics, eqn 4.32: $\Phi = \frac{1}{4\pi\epsilon}\int dv\, \frac{\rho-\nabla\cdot\mathbf{P}}{|r-r'|}$ (integral is over all space). So is this correct: $\Phi = \frac{1}{4\pi\epsilon}\int dv\, \frac{\rho-\nabla\cdot\mathbf{P}}{|r-r'|} + \frac{1}{4\pi\epsilon}\int da\, \frac{\rho-\mathbf{P}\cdot\mathbf{n}}{|r-r'|}$ (1st is a vol. integral, 2nd is a sur. integral) Note that the first term in the second eqn. is not the same as 4.32: in 4.32 integration is over all space (which includes volume and surface), while the first term in the second eqn. is over the volume only. Thanks in advance.

Why I am asking this is because Jackson provides: eqn 5.97 then "generalizes" it to 5.100; eqn 5.102 then "generalizes" it to 5.102; eqn 4.32... but no generalization... So I am seeking the "missing link", although I understand that it "may" be purely academic and not useful in practice.

You can transform the second term in 4.32 by using the divergence theorem to a surface term, but you have to be careful.

## Expressing sol. of Poisson eqn. in terms of vol. and sur. integrals.

So is my equation correct? No, I am saying that the $\nabla\cdot\mathbf{P}$ term which sits under the volume element can be written in terms of a surface integral by using the divergence theorem, but there is an extra $1/|x-x'|$ factor with which you have to be careful. Sir, my question is: is my equation above correct (y/n)? No, it is not. Why? Or, what is the correction, then?

Recognitions: Science Advisor I don't really understand your problem, but let's try. It's always good to go back to the microscopic Maxwell equations to understand what's going on physically. For electrostatics those read $$\vec{\nabla} \cdot \vec{E}=\rho, \quad \vec{\nabla} \times \vec{E}=0.$$ I use Heaviside-Lorentz (i.e., rationalized Gauß units), which is the physically most convenient system of units (without the conversion factors as in the SI). Now, one applies this to the macroscopic matter. For a non-conducting dielectric without any external charges and electromagnetic fields you have an overall neutral medium, consisting of positively charged atomic nuclei and bound electrons. When you now apply an electrostatic field, these electrons are not free to move, because we assume a non-conducting medium. For small deviations from their stationary state you can assume that they are harmonically bound and thus the electrons are slightly shifted according to the applied field. This results in the polarization of the material. Thus the total electric field consists of the applied field and the polarization: $$\vec{E}=\vec{E}_{\text{ext}}-\vec{P}.$$ The sign of the polarization $\vec{P}$ is convention (in the analogous case of magnetics its opposite for historical reasons). Taking the divergence gives $$\vec{\nabla} \cdot \vec{E}=\rho_{\text{ext}}-\vec{\nabla} \cdot \vec{P}=\rho.$$ Here, $\rho_{\text{ext}}$ is the charge distribution used to create the external electric field and $\rho$ is the total (microscopic) charge distribution.
Since for not too strong external fields we can assume the binding of the electrons to be harmonic, we can assume that the polarization is proportional to the total electric field, i.e., $$\vec{P}=\chi \vec{E}.$$ Conventionally one sets $\vec{E}_{\text{ext}}=\vec{D}=(1+\chi) \vec{E}=\epsilon \vec{E}$ and calls $\epsilon$ the dielectric constant. Back to your question. For the total electric field we have $$\vec{\nabla} \times \vec{E}=0$$ This implies that you can express the electric field as the gradient of a scalar potential: $$\vec{E}=-\vec{\nabla} \Phi.$$ Gauß's Law then implies $$-\Delta \Phi=\rho.$$ Thus we simply need the Green's function of the Laplacian to get $$\Phi(\vec{x})=\int_V \mathrm{d}^3 \vec{x}' \frac{\rho(\vec{x}')}{4 \pi |\vec{x}-\vec{x}'|}.$$ Here we assumed for convenience that the charge density is bounded to a finite region $V$. Now let's investigate the part of the electric field coming from the polarization. Here we need the expression $$\frac{\vec{\nabla}' \cdot \vec{P}(\vec{x}')}{|\vec{x}-\vec{x}'|}=\vec \nabla' \cdot \left ( \frac{\vec{P}(\vec{x}')}{|\vec{x}-\vec{x}'|} \right )-\vec{P}(\vec{x}') \cdot \vec{\nabla}' \frac{1}{|\vec{x}-\vec{x}'|}.$$ Using also $$\vec{\nabla}' \frac{1}{|\vec{x}-\vec{x}'|}=+\frac{\vec{x}-\vec{x}'}{|\vec{x}-\vec{x}'|^3}$$ we finally get, using Gauß's integral theorem $$\Phi(\vec{x})=\int_V \mathrm{d}^3 \vec{x}' \left (\frac{\rho_{\text{ext}}(\vec{x}')}{4\pi |\vec{x}-\vec{x}'|} + \frac{\vec{P}(\vec{x}') \cdot (\vec{x}-\vec{x}')}{4 \pi |\vec{x}-\vec{x}'|^3} \right ) - \int_{\partial V} \mathrm{d} \vec{A}' \cdot \frac{\vec{P}(\vec{x}')}{4 \pi |\vec{x}-\vec{x}'|}.$$ The second term vanishes, of course, if the volume encloses all charges since then $P(\vec{x}')=0$ along the boundary of the volume. If you just take a finite dielectric body as the volume, this term is important since it describes the field caused by the surface charge due to the polarization. The physical interpretation of the above equation is clear: The electric field consists of the original one (given by the integral over the term proportional to the external charge distribution $\rho_{\text{ext}}(\vec{x}')$), the sum over all dipole fields from the polarization of the medium, $\propto \vec{P}(\vec{x}')$, and finally the above surface term (if applicable) from the surface charges due to the polarization of the medium. This explains the physics of dielectrics quite clearly, but in practice these considerations are not very useful, because you don't know the polarization beforehand but you have to determine it "self consistently". That's why one rather solves the macroscopic Maxwell equations $$\vec{\nabla} \cdot \vec{D}=\rho_{\text{ext}}, \quad \vec{\nabla} \times \vec{E}=0, \quad \vec{D}=\epsilon \vec{E},$$ which also imply appropriate boundary conditions at surfaces between different kinds of matter (e.g., the boundary of the dielectric to vacuum/air, etc.). Together with these boundary conditions you can solve the problem for $\vec{D}$ and $\vec{E}$ uniquely and afterward can calculate the polarization, using $\vec{P}=\chi \vec{E}=(\epsilon-1) \vec{E}=\vec{D}-\vec{E}$. Thank you very much.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9084775447845459, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/210564-solving-probability-bernoulli-urn-expectation-fraction-balls.html
# Thread: 1. ## Solving Probability of Bernoulli Urn with the Expectation of the Fraction of Balls... Hello, I'm an undergraduate medical student (i.e., no mathematics education past high school), and I'm having difficulty understanding a concept in E. T. Jaynes's Probability Theory: The Logic of Science. On pp. 63-67, 2nd ed., he discusses a Bernoulli urn with N = 4 balls, M = 2 red ones (and 2 white, N-M), of which we must randomly draw n = 3. The balls are not replaced. (Let this proposition ≣ B.) He asks, how does knowledge that a red ball will be drawn on the second (R2) or third (R3) draw affect the probability of drawing a red ball on the first (R1)? He reveals the surprising (and awesome!) revelation that P(R1 | R2 + R3,B) > P(R1 | R2,B). I understand his intuitive explanation for it, but not his formal one. He summarises thusly: "... when the fraction F = M/N of red balls is known, then the Bernoulli urn rule applies, and P(R1 | B) = F. When F is unknown, the probability for red is the expectation of F: P(R1 | B) = <F> ≣ E(F). If M and N are both unknown, the expectation is over the joint probability distribution for M and N." I tried calculating E(F) (as I assume the second and not the third scenario applies here), but arrived at an erroneous result. In the intuitive working, he shows P(R1 | R2 + R3,B) = (4/5) / 2, calling the numerator 'effective M', and the denominator is 'N - 2'. In his formal explanation, he states that 'effective M' is the expected value of M, E(M). So, I don't know how that fits into finding the expected value of F. Basically, I don't know where the N term fits into it all. Any help would be much appreciated; if I have been too unclear I will make screenshots of the pages. 2. ## Re: Solving Probability of Bernoulli Urn with the Expectation of the Fraction of Ball Frankly I do not follow the text. But the table below gives all possible outcomes.
We are looking for $\mathcal{P}(R_1|R_2\cup R_3)$. The possible colour sequences for the three draws are:

| I | II | III |
|---|----|-----|
| R | R | W |
| R | W | R |
| R | W | W |
| W | R | R |
| W | R | W |
| W | W | R |

Each of these six sequences is equally likely: each one can be realized by 4 of the $4\cdot 3\cdot 2=24$ equally likely ordered draws of three of the four distinguishable balls, so counting rows is legitimate. Of those six we concentrate on the rows which have an R in the second or third column. That is the given part. There are only five of those. Of those five, only two also have an R in the first column. Thus $\mathcal{P}(R_1|R_2\cup R_3)=\frac{2}{5}$. 3. ## Re: Solving Probability of Bernoulli Urn with the Expectation of the Fraction of Ball Thank you Plato, that part I understand. Here is the section in question (imgur link). Referring to the text, my confusion lies with reconciling Equation 3.63 with the paragraph below Equation 3.71 (explaining Eq. 3.63 in a more 'cogent' way...). It mostly lies in the fact I don't understand how to find the expectation of the fraction M/N. (I understand finding the expectation of M, but not when N is involved.)
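A brute-force enumeration confirms Plato's count; this is a small sketch I am adding (not part of the thread), treating the four balls as distinguishable and drawing three of them in order:

```python
from itertools import permutations

# All ordered ways to draw 3 of the 4 balls (2 red, 2 white): 24 sequences.
draws = [p[:3] for p in permutations("RRWW")]

given = [d for d in draws if "R" in d[1:]]   # condition: red on draw 2 or 3
fav   = [d for d in given if d[0] == "R"]    # ...and also red on draw 1

print(len(fav), len(given), len(fav) / len(given))   # 8 20 0.4, i.e. 2/5
```

The same enumeration can be reused to check the expectation calculations in Jaynes's formal treatment.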
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9263715744018555, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/236724/tips-on-studying-the-convergence-of-this-series?answertab=oldest
# Tips on studying the convergence of this series Can someone please offer any tips on how to manipulate the series below, or what convergence test I should use to determine its convergence based on $\alpha$? $$\sum_{n\ge1} \frac{\sqrt[3]{n+1}+\sqrt[3]{n}}{n^\alpha},\ \alpha \in \mathbb{R}$$ - ## 1 Answer Hint: It is easy to see that the numerator is $\gt 2n^{1/3}$. And since $n+1\le 2n$, the numerator is $\le (1+2^{1/3})n^{1/3}$. These inequalities will be sufficient for comparison tests. We really don't need such tight bounds. Any positive constants $a$ and $b$ such that $an^{1/3}\le (n+1)^{1/3}+n^{1/3}\le bn^{1/3}$ are good enough for the comparisons. Alternately, you could divide top and bottom by $n^{1/3}$. On top you get $\sqrt[3]{1+\frac{1}{n}}+1$, and at the bottom you get $n^{\alpha-1/3}$. -
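Spelling out where the hint leads (my addition, not part of the original answer): the two bounds give

$$2\,n^{1/3-\alpha}\;\le\;\frac{\sqrt[3]{n+1}+\sqrt[3]{n}}{n^{\alpha}}\;\le\;\left(1+2^{1/3}\right)n^{1/3-\alpha},$$

so by comparison with the $p$-series $\sum n^{1/3-\alpha}$ the series converges if and only if $\alpha-\tfrac{1}{3}>1$, that is, $\alpha>\tfrac{4}{3}$.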
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429324865341187, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/measurement?sort=faq&pagesize=30
# Tagged Questions The measurement tag has no wiki summary. 13 answers, 7k views ### Home experiments to derive the speed of light? Are there any experiments I can do to derive the speed of light with only common household tools? 2 answers, 205 views ### Measuring extra-dimensions I have read and heard in a number of places that extra dimensions might be as big as $x$ mm. What I'm wondering is the following: How is length assigned to these extra dimensions? I mean you can ... 5 answers, 2k views ### What if the universe is rotating as a whole? Suppose in the milliseconds after the big bang the cosmic egg had acquired some large angular momentum. As it expanded, keeping the momentum constant (no external forces) the rate of rotation would ... 1 answer, 843 views ### How to combine the error of two independent measurements of the same quantity? I have measured $k_1$ and $k_2$ in two measurements and then I calculated $\Delta k_1$ and $\Delta k_2$. Now I want to calculate $k$ and $\Delta k$. $k$ is just the mean of $k_1$ and $k_2$. I thought ... 1 answer, 364 views ### About binary stars and calculating velocity, period and radius of their orbit I saw somewhere that one can measure the velocity, period and radius of a binary star orbit by looking at red shift and blue shift. I understand it but can someone give me an example of ... 4 answers, 652 views ### Energy Measurements in a Two Fermion Double Well System This question is related but my question here is much more elementary than discussions of the Pauli principle across the universe. There has been a fair amount of discussion around at the moment on ... 1 answer, 144 views ### Photometer: measured Irradiance L converted to photon rate I am conducting an experiment in which the power meter reading of a $410\,nm$ narrow bandpass stimulus is noted to be 30 $\frac{\mu W}{cm^2}$ at a distance of 1 inch away from the light source. I wish ... 4 answers, 649 views ### Measuring the speed of light and defining the metre - absolute or relative? If the metre is now defined as the distance light travels in vacuum in 1⁄299,792,458th of a second and the speed of light is accepted to be ... 1 answer, 174 views ### Quantum Zeno effect and unstable particles Is it possible to increase indefinitely the lifetime of unstable particles by applying the quantum Zeno effect? Is there a bound from theoretical principles about the maximum extension one can get in ... 1 answer, 1k views ### How did they measure the speed of light observing Jupiter's moons, centuries ago? I am interested in the practical method and I would like to discover if it is cheap enough to be done as an experiment in a high school. Thank you. 11 answers, 749 views ### Is it possible for a physical object to have an irrational length? Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational. If I were to use this ... 3 answers, 523 views ### How hot is the water in the pot? Question: How hot is the water in the pot? More precisely speaking, how can I get the temperature of the water as a function of time a priori? Background & My attempt: Recently I started to spend ... 3 answers, 844 views ### How to combine measurement error with statistical error We have to measure the period of an oscillation. We are to take the time it takes for 50 oscillations multiple times. I know that I will have a $\Delta t = 0.1 \, \mathrm s$ because of my reaction ... 10 answers, 1k views ### What is the wavefunction of the observer himself?
I am aware of different interpretations of quantum mechanics out there but would mostly like to see an answer from the perspective of the Copenhagen interpretation (or relational quantum mechanics if you ... 3 answers, 772 views ### What is the velocity area method for estimating the flow of water? Can anyone explain to me what the velocity-area method for measuring river or water flow is? My guess is that the product of the cross sectional area and the velocity of water flowing in a pipe is ... 3 answers, 112 views ### Precision of Coulomb's law Up to which precision has Coulomb's law been proven to be true? I.e. if you have two electrons in a vacuum chamber, 5 meters apart, have the third order terms been ruled out? Are there any theoretical ... 1 answer, 293 views ### Measuring the magnitude of the magnetic field of a single electron due to its spin Is it possible to measure the magnitude of the magnetic field of a single electron due to its spin? The electron's intrinsic magnetic field is not dependent upon the amount of energy it has, does it? ... 0 answers, 103 views ### Why is the Planck length the shortest measurable length? [duplicate] I quote from the Wikipedia article on Planck length: According to the generalized uncertainty principle, the Planck length is in principle, within a factor of order unity, the shortest ... 1 answer, 82 views ### Basic question about probability and measurements Say I have a Galton box, i.e. a ball dropping on a row of solid bodies. Now I want to calculate the probability distribution of the movement of the ball based on the properties of the body (case A). ... 1 answer, 408 views ### How much water must flow through a canal to maintain a constant water depth? In order to maintain a constant water depth in a canal, how much water must flow through the pipe? As shown in the picture, the canal has a rectangular shape. I don't know if the canal length has an influence. ... 1 answer, 224 views ### Is $\sigma$ or $\sigma / \sqrt{N}$ the error of a measurement? I wonder whether $\sigma$ or $\sigma / \sqrt{N}$ is the error of a measurement. When I measure, say $0, 1, -1, 1, -1$, I have $\sigma = 1$. If I just measure $0, 1, -1$, I also have $\sigma = 1$. But in ... 4 answers, 483 views ### How to describe a well defined "zero moment" in time Suppose you have to specify the moment in time when a given event occurred, a "zero time". The record must be accurate to the minute, and be obtainable even after thousands of years. All the measures ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363570213317871, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/47122-log-functions-calculator.html
# Thread: 1. ## Log Functions in a Calculator Hi, I just wanted to know how to graph $f(x)=\log_2 x$. I think it would be something similar to $\log 2/\log x$ but I do not recall. I'd appreciate it if someone could clarify, thanks. 2. $\log_2{x} = \frac{\log{x}}{\log{2}}$
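A quick plotting sketch using the change-of-base formula from the answer (my addition; it assumes numpy and matplotlib are available):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.1, 10, 200)
y = np.log(x) / np.log(2)   # change of base: log2(x) = log(x) / log(2)

plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("log2(x)")
plt.show()
```

On a graphing calculator the same trick works: enter Y1 = log(X)/log(2).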
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552369117736816, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/119937/question-about-the-hardy-littlewood-method-quite-basic/119971
## Question about the Hardy-Littlewood method (quite basic) Hi, I have a question about the Hardy-Littlewood method. Writing $R_s(n)$ for the number of ways to write $n$ as a sum of $s$ $k$-th powers and $f(\alpha )$ for the sum $\sum _{m=1}^Ne(\alpha m^k)$, we have $R_s(n)=\int _\mathfrak Uf(\alpha )^se(-\alpha n)d\alpha ,$ where $\mathfrak U$ is some unit interval. The aim is to work out an asymptotic expression for $R_s(n)$ by studying this integral. We split the domain of integration into the major and minor arcs: $R_s(n)=\int _\mathfrak Mf(\alpha )^se(-\alpha n)d\alpha +\int _\mathfrak mf(\alpha )^se(-\alpha n)d\alpha$ and aim to work out an asymptotic expression for the integral over the major arcs whilst making sure the minor arc contributions are "not too big". Now, this rests on the fact that $f(\alpha )$ gets "big" near a rational number, and moreover gets "more big" the smaller the denominator of the rational number, so that the major arcs, being intervals around rationals with "small" denominators, give the biggest contribution. My problem essentially is, I think, that I don't understand why this should be true; why does $f$ get "big" near rationals, that is. In Vaughan's book, we approximate $f(\alpha )$ near these rationals through the function $v(\beta )$ (page 14). I think Lemma 2.7 contains what I don't understand. For example, does this lemma say that $f(\alpha )\sim \frac {S(q,a)}{q}v(\alpha -a/q)$, as $n\rightarrow \infty$? And how exactly does it influence our definition of the major arcs and our choice of parameter $v$ in the definition of the major arcs? I assume that $v$ is chosen small enough to gain a saving on the estimate $n^{s/k-1}$ in (2.13) on page 16, and that having $q\leq N^v,\alpha \in \mathfrak M(q,a)$ ensures we get an error no larger than $N^{2v}$ in the lemma, but I still feel I'm missing the big picture in the analysis somehow. Perhaps I don't have a specific problem as such but want to clarify the situation a little bit; perhaps also it is a problem in that I'm simply not quite settled in the fact that $f(\alpha )$ is large at rationals with small denominators. In any case, I'd appreciate any thoughts/clarifications/things to think about. Thanks very much. - I suggest trying to work out explicitly the case where $k=1$ first, when the sum is easily computable and you can use the usual approximation to this sum of numbers over the unit circle. – Asaf Jan 26 at 16:44 Good suggestion. – Tomos Parry Jan 27 at 12:33 ## 1 Answer I'll split this into two questions: 1. Why can $f$ be large near rationals with small denominator? 2. Why is $f$ small away from these points (i.e. on the minor arcs)? For (1), I should first clarify that $f$ isn't always large at these rationals; it could well be zero. But it is large some of the time. Let's look at the example of $f(a/8)$ for $a \in \{1, 3, \dots, 7\}$, when (say) $k=2$. We have: $$f(a/8) = \sum_{n=1}^N e(a\, n^2 / 8).$$ Since $e(a\,n^2/8)$ only depends on the residue of $n \bmod{8}$, we have: $$f(a/8) = \frac{N}{8} \sum_{n \in \mathbb{Z} / 8 \mathbb{Z}} e(a\, n^2 / 8) + O(1).$$ But $n^2$ is congruent to $0 \bmod{8}$ a quarter of the time, $4 \bmod{8}$ a quarter of the time and $1 \bmod{8}$ half the time, so: $$f(a/8) = N \left(\frac{1}{4} + \frac{1}{4} e(a/2) + \frac{1}{2} e(a/8) \right) + O(1) = \frac{N}{2} e(a/8) + O(1)$$ (recalling $a$ was odd).
That's going to be pretty large (absolute value about $N/2$). The reason is essentially that $f(a/8)$ is detecting a huge bias in the squares, namely that many more than average are $1 \bmod 8$. The same's going to happen for other denominators: only about half of all residues modulo $q$ get to be squares (for $q$ a prime power), and this bias causes large values of $f(a/q)$. Another way of saying exactly the same is that $(n^2 / 8) \pmod{1}$ has a very non-uniform distribution, with a lot of the mass bunched at $1/8$. This brings us to (2). The question is: might the squares have similar bias "$(\bmod{\sqrt{2}})$"? I have no idea where such a bias might plausibly come from, and fortunately in any case the answer is no. That is, if you were to draw the set: $$\left\{ n^2 / \sqrt{2} \pmod{1}\,:\, n \in \{1 \dots N\} \right\}$$ as a subset of $[0, 1)$, you'd find they were smeared out all over the place. More formally, the proportion of these points contained in any interval $[a, b]$ is roughly $(b-a)$, and we say the set is equidistributed modulo 1. As $N$ becomes large, this equidistribution becomes quantitatively better. Now, by Weyl's Equidistribution Criterion (many good references on the web), my previous statement is essentially equivalent to $f(a / b \sqrt{2})$ being small, whenever $a, b \in \mathbb{Z}$ are smallish. (Intuitively, you get a lot of cancellation in the definition of $f$.) In practice, it's usual to do the converse to this, i.e. deduce equidistribution from bounds on $f(a / b \sqrt{2})$; but I think equidistribution shows more intuitively what is going on in the minor arc bounds. How do you prove it? Well, there's some work to be done at some point. Proving Weyl's Inequality gives quite a general bound of this form. Another (essentially equivalent) statement is Van der Corput's lemma (see e.g. the first half of Terry Tao's post on this): If $a_n$ is a real sequence such that $(a_{n+h} - a_n)$ equidistributes $\bmod{1}$ for every non-zero $h \in \mathbb{Z}$, then $a_n$ equidistributes $\bmod{1}$. In our case, $((n + h)^2 - n^2) / \sqrt{2} = (2 h n + h^2) / \sqrt{2}$ is reasonably easily shown to equidistribute $\bmod{1}$, for fixed $h$. Hope some of that helps. - That explains a lot; it tells me that it boils down to the distribution of the $k$-th powers modulo $q$. Thanks for your help. – Tomos Parry Jan 27 at 12:30
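A small numerical illustration of both points (my addition, not part of the original answer): the normalized sum $|f(\alpha)|/N$ is large at $\alpha=1/8$ and small at $\alpha=1/\sqrt{2}$.

```python
import numpy as np

N = 10_000
n = np.arange(1, N + 1)

def f(alpha):
    # f(alpha) = sum_{n <= N} e(alpha * n^2), with e(x) = exp(2 pi i x)
    return np.exp(2j * np.pi * alpha * n**2).sum()

print(abs(f(1 / 8)) / N)            # ~0.5: large peak at a rational with small denominator
print(abs(f(1 / np.sqrt(2))) / N)   # much smaller, and it shrinks as N grows
```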
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9495634436607361, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2011/03/23/sheaves-of-functions-on-manifolds/?like=1&source=post_flair&_wpnonce=671450efa2
# The Unapologetic Mathematician ## Sheaves of Functions on Manifolds Now that we’ve talked a bunch about presheaves and sheaves in general, let’s talk about some particular sheaves of use in differential topology. Given a smooth manifold — for whatever we choose smooth to mean — we can define sheaves of real algebras of real-valued functions for every less-stringent definition of smoothness. In the first case of a bare topological manifold $M$, we have no real sense of differentiability at all, and so it only makes sense to talk about continuous real-valued functions $f:U\to\mathbb{R}$. Given an open set $U\subseteq M$ we let $\mathcal{O}^0_M(U)$ be the $\mathbb{R}$-algebra of real-valued functions that are defined and continuous on $U$. Next, if $M$ is a $C^1$ manifold, then it not only makes sense to talk about continuous real-valued functions — we can define $\mathcal{O}^0_M$ just as above — but we can also talk about differentiable real-valued functions. Given an open set $U\subseteq M$, we let $\mathcal{O}^1_M(U)$ be the $\mathbb{R}$-algebra of continuously-differentiable real-valued functions $f:U\to\mathbb{R}$. As we increase the smoothness of $M$, we can consider smoother and smoother functions. If $M$ is a $C^k$ manifold, we can define $\mathcal{O}^j_M$ for each $j\leq k$ — the sheaf of $j$-times continuously-differentiable functions. Given an open set $U$, we let $\mathcal{O}^j_M(U)$ be the $\mathbb{R}$-algebra of real-valued functions on $U$ with $j$ continuous derivatives. Continuing up the ladder, if $M$ is a $C^\infty$ manifold, then we can define all of the above sheaves, along with the sheaf $\mathcal{O}^\infty_M$ of infinitely-differentiable functions. And if $M$ is analytic, we can also define the sheaf of analytic functions. In each case, I’m not going to bother going through the proof that we actually do get sheaves. The core idea is that continuity, differentiability, and analyticity are notions defined locally, point-by-point. Thus if we restrict the domain of such a function we get another function of the same kind, and pasting together functions that agree on their overlaps preserves smoothness. This doesn’t hold, however, for global notions like boundedness — it’s easy to define a collection of functions on an open cover of $\mathbb{R}$, each of which is bounded, which define an unbounded function when pasted together (for instance, the restriction of $f(x)=x$ to each interval $(n-1,n+1)$ is bounded, but these functions patch together to the unbounded identity function). For each class of manifolds, the sheaf of the smoothest functions we can define has a special place. If $M$ is in class $C^k$ — where $k$ can be $0$, any finite whole number, $\infty$, or $\omega$ — then the sheaf $\mathcal{O}^k_M$ is often just written $\mathcal{O}_M$, and is called the “structure sheaf” of $M$. It turns out that most, if not all, of the geometrical properties of $M$ are actually bound up within its structure sheaf, and so this is a very important object of study indeed. Posted by John Armstrong | Differential Topology, Topology
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9147095680236816, "perplexity_flag": "head"}
http://diracseashore.wordpress.com/category/physics/thermodynamics/
# Shores of the Dirac Sea ## Unstable Universes Posted in gravity, high energy physics, Physics, thermodynamics on February 19, 2013 | 7 Comments » It’s a fine day for the Universe to die, and to be new again! Well, maybe not, but the Internet is abuzz with a reincarnation of the unstable universe story. (You can also see it here, or here; the whole thing is trending on Google). In other works, this is known as tunneling between vacua. And if you have followed the news about the Landscape of vacua in string theory, this should be old news (that we may live in an unstable Universe, which we don’t know). For some reason, this wheel gets reinvented again and again with different names. All you need is one paper, or conference, or talk to make it sound exciting, and then it’s “Coming attraction: the end of the Universe … a couple of billion years in the future“. The basic idea is very similar to superheated water, and the formation of vapor bubbles in the hot water. What you have to imagine is that you are in a situation where you have a first order phase transition between two phases. Call them phase A and B for lack of a better word (superheated water and water vapor), and you have to assume that the energy density in phase A is larger than the energy density in phase B, and that you happened to get a big chunk of material in phase A. This can be done in some microwave ovens, and you can have water explosions if you don’t watch out. Now let us assume that someone happened to nucleate a small (spherical) bubble of phase B inside phase A, and that you want to estimate the energy of the new configuration. You can make the approximation that the wall separating the two phases is thin for simplicity, and that there is an associated wall (surface) tension $\sigma$ to account for any energy that you need to use to transition between the phases. The energy difference (or difference between free energies) of the configuration with the bubble and the one without the bubble is $\Delta E_{tot} = (\rho_B-\rho_A) V +\sigma \Sigma$ where $\rho_{A,B}$ are the energy densities of phase A and phase B, $V$ is the volume of region B, and $\Sigma$ is the surface area between the two phases. If $\Delta E_{tot}>0$, then the surface term has more energy stored in it than the volume term releases. In the limit where we shrink the bubble to zero size, we get no energy difference. For big volumes, the volume term wins over the area, and we get a net lowering of the energy, so the system would not have enough energy in it to restore the region filled with phase B back to phase A. In between there is a Goldilocks bubble that has the exact same energy as the initial configuration. So if we look carefully, there is an energy barrier between the situation with no bubble and a Goldilocks bubble large enough that there is no net change in energy. If the bubbles are too small, they tend to shrink, and if the bubbles are big they start to grow even bigger. There are two standard ways to get past such an energy barrier. In the first way, we use thermal fluctuations. In the second one (the more fun one, since it can happen even at zero temperature), we use quantum tunneling to get from no bubble to bubble. Once we have the bubble, it expands. Now, you might ask, what does this have to do with the Universe dying? Well, imagine the whole Universe is filled with phase A, but there is a phase B lurking around with less energy density.
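To make the barrier quantitative, here is the standard thin-wall computation for a spherical bubble (my addition, not part of the original post). For a bubble of radius $R$,

$$\Delta E_{tot}(R) = -(\rho_A-\rho_B)\,\frac{4\pi}{3}R^3 + 4\pi\sigma R^2,$$

which is maximized at the critical radius $R_c = 2\sigma/(\rho_A-\rho_B)$, giving a barrier height

$$\Delta E_{tot}(R_c) = \frac{16\pi}{3}\,\frac{\sigma^3}{(\rho_A-\rho_B)^2}.$$

Bubbles with $R<R_c$ shrink and bubbles with $R>R_c$ grow, and the "Goldilocks" bubble with zero net energy change sits at $R=3\sigma/(\rho_A-\rho_B)$.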
If a bubble of phase B happens to nucleate, then such a bubble will expand (usually it will accelerate very quickly to reach the maximum speed in the universe: the speed of light) and get bigger as time goes by, eating everything in its way (including us). The Universe filled with phase A gets eaten up by a universe with phase B. We call that the end of Universe A. You need to add a little bit more information to make this story somewhat consistent with (classical) gravity, but not too much. This was done by Coleman and De Luccia, way back in 1980. You can find some information about this history here. Incidentally, this has been used to describe how inflating universes might be nucleated from nothing, and people who study the Landscape of string vacua have been trying to understand how this tunneling between vacua might seed the Universe we see in some form or another, from a process where these tunneling events explore all possibilities. You can reincarnate that into today's version of “The end is near, but not too near”. We know the end is not too near, because if it was, it would have already happened. I’m going to skip this statistical estimate: all you have to understand is that the expected time that it would take to statistically nucleate that bubble somewhere has to be at least the age of the currently known universe (give or take). I think the only reason this got any traction was because the Higgs potential in just the Standard Model, with no dark matter, with nothing more, in all its possible incarnations, is involved in it somehow. Next week: see baby Universe being born! Isn’t it cute? That’s the last thing you’ll ever see: Now you die! Fine print: Ab initio calculations of the “vacuum energies” and “tunneling rates” between various phases are not model independent. It could be that the lifetime of the current Universe is in the trillions or quadrillions of years if a few details are changed. And all of these details depend on the physics at energy scales much larger than the standard model, the precise details of which we don’t know much about at all. The main reason these numbers can change so much is because a tunneling rate is calculated by taking the exponential of a negative number. Order one changes in the quantity we exponentiate lead to huge changes in estimates for lifetimes. Read Full Post » ## Bad science reporting versus good science reporting Posted in thermodynamics, tagged science reporting, thermodynamics on January 4, 2013 | 3 Comments » Today I was greeted with the following line: “Quantum Gas Temperature Drops Below Absolute Zero”. This is the way the news about a quantum system that is effectively at negative temperature was reported in Wired. The thing is, negative temperatures are hotter than any finite positive temperature. One can also check this fact in the Wikipedia entry for negative temperature. The simplest system that has an effective negative temperature is a laser: to get a negative temperature one just needs what is called a population inversion. In that report it was stated that “Previously absolute zero was considered to be the theoretical lower limit of temperature as temperature correlates with the average amount of energy of the substance’s particles.” The crucial mistake is the expression “Previously absolute zero was considered”, which suggests that we have turned theoretical physics knowledge on its head, because a new and revolutionary temperature below zero has been obtained!
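A quick two-level-system illustration of why negative temperatures are hot rather than cold (my addition, not from the post): for level spacing $\epsilon$, the Boltzmann populations satisfy

$$\frac{n_\uparrow}{n_\downarrow} = e^{-\epsilon/k_B T},$$

so a population inversion $n_\uparrow > n_\downarrow$ forces $T<0$. Equivalently, $1/T=\partial S/\partial E$ turns negative once adding energy decreases the entropy, and such a system gives up energy to any system at positive temperature; it is effectively hotter than $T=+\infty$.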
Moreover, we have measured systems with negative temperatures at least since the invention of the laser, and actually since the invention of the maser (a laser in the microwave region). The news is actually well reported in Ars Technica. In that article they explain correctly how to think about negative temperatures. Read Full Post » ## Gravity does not exist? Posted in Academia, Quantum Gravity, science and society, thermodynamics on July 13, 2010 | 31 Comments » I happened onto an article in the New York Times about Erik Verlinde’s take on gravity as an entropic force. The article was written by Dennis Overbye, who most of the time does a good job of covering high energy physics. Erik’s work dates from earlier this year and can be found here. To tell the truth, I don’t understand what he’s trying to say in that paper, and to me it feels like it’s almost certainly wrong. However, I don’t want to discuss that paper. What I want to discuss is the following provocative quote: “We’ve known for a long time gravity doesn’t exist,” Dr. Verlinde said, “It’s time to yell it.” I don’t believe this is taken out of context, so we should take it at face value. The statement is obviously wrong, so it sounds like ultra-post-modern pap and makes all physicists working on the subject of quantum gravity look like crazy mad men. I’m sure this sells newspapers, but that is not the point. When asked for a sound bite, can’t people at least say something that is correct and not just provocative? The proper way to write that statement is that “Gravity is not really a fundamental force“, which is more correct and does not deny gravity its proper place as something that has been observed in nature; however, it is less catchy. If we apply the same criteria as used in the above construction, all of the following statements are also correct: • Hydrodynamics does not exist (it only happens for collections of atoms, but not for individual ones) • Space and time do not exist (often used when talking about quantum gravity being emergent from somewhere else) • All emergent phenomena do not exist (they are not fundamental after all). • I do not exist (I’m an emergent phenomenon). Reminds me of discussions I have read before at Backreaction, here and see also here in the Discover magazine about time not existing. You should also read the following from Asymptotia: But is it real? and also a discussion on What is fundamental, Anyway? Read Full Post » ## Black holes as frozen stars Posted in gravity, high energy physics, quantum fields, Quantum Gravity, relativity, string theory, thermodynamics on February 19, 2009 | 13 Comments » We now have a few working examples of a microscopic theory of quantum gravity, all of which come with specific boundary conditions (like any other equation in physics or mathematics) but otherwise full background independence. In particular, all those theories include quantum black holes, and we can ask all kinds of puzzling questions about those fascinating objects. Starting with: what exactly is a black hole? (more…) Read Full Post » ## Maldacena’s information paradox Posted in high energy physics, Quantum Gravity, string theory, thermodynamics on January 15, 2009 | 55 Comments » Back in 2001, in a truly beautiful paper, Juan Maldacena formulated a version of Hawking’s information paradox, which has the added advantage that it could be discussed and analyzed in the context of a complete background independent theory of quantum gravity, namely that of the AdS/CFT correspondence.
This variant is similar to the original paradox, formulated for black holes surrounded by flat space, in that it displays a sharp conflict between properties of black holes in classical General Relativity and basic postulates of quantum mechanics. Alas, it is also different in many crucial ways from the original paradox. Despite that, Juan’s proposed resolution to his paradox seems to have led to Hawking’s arguments, who managed to convince himself (though I think it is fair to say not too many others, unless they were already convinced) that information is not lost after all in the process of black hole formation and evaporation. (more…) Read Full Post » ## Everyday physics: pressure cookers. Posted in Physics, recipes, thermodynamics, tagged hummus, Physics, pressure cookers, recipe, thermodynamics on September 17, 2008 | 12 Comments » Most graduate students out there can at times feel a lot of pressure to perform. Heck, that is not just graduate students. Pressure at work seems to be one of those things that most people can relate to and feel all the time. It gives rise to a lot of stress. I don’t have any good recipe for combating stress that is guaranteed, but cooking sometimes helps. Seeing as what I just wrote is not too funny, maybe I should get to the point and talk about pressure cookers, and as a bonus you get a cooking recipe (no peeking). When in doubt, pick a gadget and explain it, yes sireee. Besides, the explanations that I found on the internet about how pressure cookers work teeter on the edge of crackpottery. Not all are bad, but most don’t really say anything other than “water boils at higher temperature at higher pressure”, which is correct but unsatisfying. It’s like saying it just works that way. My 3d rendition of a pressure cooker (more…) Read Full Post »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9464794397354126, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/251/design-properties-of-the-rijndael-finite-field/254
# Design properties of the Rijndael finite field So we've already had a question on replacing the Rijndael S-Box. My question is: can we use a different finite field other than the one given by $x^8 + x^4 + x^3 + x + 1$ in $GF(2^8)$? In other words, would any irreducible polynomial over this field do the trick, or are there special considerations behind that particular reducing polynomial? If a more general discussion on appropriate fields for cryptographic operations, such as those used in ECC, is necessary, then I'd be happy to hear it. What I'm trying to get to is an understanding of whether there are any important properties in terms of the "randomness" of the operation - for example, are there weak fields to operate over? For example, $x^8 + x^3 + x + 1$ ought to also be irreducible - so would it be suitable? I ask partly because of the Galois/Counter Mode issue with short tags, which led me to think not all fields might hold equal strength. Right? Wrong? Inconsequential? - ## 2 Answers For security of ECC, the choice of the irreducible polynomial is unimportant, because all finite fields with the same cardinal are isomorphic to each other (and the isomorphisms are easy to compute); this is why we can say "the finite field $GF(2^{163})$" even though there are many irreducible polynomials of degree $163$ over $GF(2)$. So the algebraic structure of an elliptic curve is not impacted by the actual field choice (of course, the curve is described with an equation $Y^2+XY=X^3+aX^2+b$ for two given constants $a$ and $b$, and those constants are represented with a given choice for the field; but using another field and applying the isomorphism on $a$ and $b$ yields another curve with the same structure). In practice, we use polynomials which promote implementation efficiency, i.e. polynomials with very low Hamming weight ($3$ if possible, $5$ otherwise), and where the non-zero coefficients are of the smallest possible degree (to make post-multiplication modular reduction easier). For the Rijndael S-Box, things are not as simple. In the original Rijndael specification, there are some explanations on the design rationale, which were not copied into the final FIPS 197 standard. The S-Box is treated on page 26; the use of an inverse in $GF(2^8)$ comes from a 1994 article by Nyberg, as providing good defense against differential cryptanalysis. To this inverse, Daemen and Rijmen added the so-called "affine mapping", which is meant to mask the algebraic structure of the said multiplicative inverse. Not all details are given, so presumably they tried many polynomials (there are 30 irreducible polynomials of degree 8 over $GF(2)$, 16 of which being primitive) and many possible affine mappings, "measured" resistance to differential and linear cryptanalysis, and kept the best candidate. Daemen and Rijmen published a book on the design of Rijndael (I have not read it). - I believe the designers picked the specific polynomial because it was the first irreducible one from the book they cited that met their criteria. Thus, they picked the first one to remove the doubt that it was special. – Jeff Moser Aug 2 '11 at 21:23 I just looked in the book (The Design of Rijndael), and the description of the SubBytes step (section 3.4.1, pages 34-37) doesn't contain much more information than what is in the specification linked (but said in more words).
– Paŭlo Ebermann♦ Nov 28 '11 at 16:53 Actually, the choice of irreducible polynomial is unimportant in AES; for any polynomial representation of $GF(2^8)$, you can modify the affine transformation (and the MixColumn operation) to come up with a block cipher that is equivalent to AES (meaning that any break of that can be translated to a break of the original AES). The key observation here is that, for any two polynomial representations $A$ and $B$ of the same $GF(2^N)$ field, there is a bitwise linear isomorphism $L$ from elements in representation $A$ to elements in representation $B$ which preserves both the field addition and multiplication operations: $L( X +_A Y ) = L(X) +_B L(Y)$ $L( X \times_A Y) = L(X) \times_B L(Y)$ So, if $A$ is the standard AES polynomial representation ($x^8 + x^4 + x^3 + x +1$), and $B$ is your favorite alternative representation, you can implement AES using your representation, with each internal value being mapped by $L$ compared to the equivalent value in the standard AES operation. The affine operation then becomes: $Affine'(X) = L( Affine( L^{-1} ( X )))$ which is also affine (because $L$ is linear). In addition, in the MixColumn operation, instead of multiplying elements by 2 and 3 (in the $A$ representation), you multiply them by $L(2)$ and $L(3)$ (in the $B$ representation). The result of all this is an alternative cipher that is equivalent to standard AES where all inputs (plaintext, keys) are transformed by a linear operation, and whose output (ciphertext) is also transformed by a linear operation. This linear operation can be publicly computed, and hence cannot impact security. On another note, you state that $x^8 + x^3 + x + 1$ ought to be irreducible; well, as $x^8 + x^3 + x + 1 = (x+1) \times (x^7 + x^6 + x^5 + x^4 + x^3 + 1)$, that is not the case. - 1 +1 for the last paragraph alone! A polynomial with an even number of terms is never irreducible over GF(2). – Jyrki Lahtonen Jan 22 '12 at 8:24
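A small sketch checking both polynomials over $GF(2)$ by brute-force trial division (my addition; polynomials are encoded as bit masks, e.g. `0b100011011` is $x^8+x^4+x^3+x+1$):

```python
def gf2_mod(a, m):
    """Remainder of a modulo m in GF(2)[x] (carry-less division)."""
    dm = m.bit_length()
    while a.bit_length() >= dm:
        a ^= m << (a.bit_length() - dm)
    return a

def irreducible(p):
    """Trial-divide p by every polynomial of degree 1 .. deg(p)//2."""
    deg = p.bit_length() - 1
    return all(gf2_mod(p, d) != 0 for d in range(2, 1 << (deg // 2 + 1)))

print(irreducible(0b100011011))  # True:  x^8+x^4+x^3+x+1, the Rijndael polynomial
print(irreducible(0b100001011))  # False: x^8+x^3+x+1 = (x+1)(x^7+x^6+x^5+x^4+x^3+1)
```

The second result also illustrates Jyrki Lahtonen's remark above: a polynomial with an even number of terms vanishes at $x=1$ over $GF(2)$, so $x+1$ always divides it.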
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393929839134216, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/10/12/transforming-differential-operators/?like=1&source=post_flair&_wpnonce=f636fc8316
# The Unapologetic Mathematician ## Transforming Differential Operators Because of the chain rule and Cauchy’s invariant rule, we know that we can transform differentials along with functions. For example, if we write $\displaystyle\begin{aligned}x&=r\cos(\theta)\\y&=r\sin(\theta)\end{aligned}$ we can write the differentials of $x$ and $y$ in terms of the differentials of $r$ and $\theta$: $\displaystyle\begin{aligned}dx&=\cos(\theta)dr-r\sin(\theta)d\theta\\dy&=\sin(\theta)dr+r\cos(\theta)d\theta\end{aligned}$ It turns out that the chain rule also tells us how to rewrite differential operators in terms of the variables. But these go in the other direction. That is, we can write the differential operators $\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial\theta}$ in terms of the operators $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$. First of all, let’s write down the differential of $f$ in terms of $x$ and $y$ and in terms of $r$ and $\theta$: $\displaystyle\begin{aligned}df&=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy\\df&=\frac{\partial f}{\partial r}dr+\frac{\partial f}{\partial\theta}d\theta\end{aligned}$ and now we can rewrite $dx$ and $dy$ in terms of $dr$ and $d\theta$. $\displaystyle\begin{aligned}df&=\frac{\partial f}{\partial x}\left(\cos(\theta)dr-r\sin(\theta)d\theta\right)+\frac{\partial f}{\partial y}\left(\sin(\theta)dr+r\cos(\theta)d\theta\right)\\&=\frac{\partial f}{\partial x}\cos(\theta)dr-\frac{\partial f}{\partial x}r\sin(\theta)d\theta+\frac{\partial f}{\partial y}\sin(\theta)dr+\frac{\partial f}{\partial y}r\cos(\theta)d\theta\\&=\left(\cos(\theta)\frac{\partial f}{\partial x}+\sin(\theta)\frac{\partial f}{\partial y}\right)dr+\left(-r\sin(\theta)\frac{\partial f}{\partial x}+r\cos(\theta)\frac{\partial f}{\partial y}\right)d\theta\end{aligned}$ Now by uniqueness we can read off the partial derivatives of $f$ in terms of $r$ and $\theta$: $\displaystyle\begin{aligned}\frac{\partial f}{\partial r}&=\cos(\theta)\frac{\partial f}{\partial x}+\sin(\theta)\frac{\partial f}{\partial y}\\\frac{\partial f}{\partial\theta}&=-r\sin(\theta)\frac{\partial f}{\partial x}+r\cos(\theta)\frac{\partial f}{\partial y}\end{aligned}$ Finally, we pull all mention of $f$ out of our notation and just write out the differential operators. $\displaystyle\begin{aligned}\frac{\partial}{\partial r}&=\cos(\theta)\frac{\partial}{\partial x}+\sin(\theta)\frac{\partial}{\partial y}\\\frac{\partial}{\partial\theta}&=-r\sin(\theta)\frac{\partial}{\partial x}+r\cos(\theta)\frac{\partial}{\partial y}\end{aligned}$ Now we’re done rewriting, but for good form we should express these coefficients in terms of $x$ and $y$. $\displaystyle\begin{aligned}\frac{\partial}{\partial r}&=\frac{x}{\sqrt{x^2+y^2}}\frac{\partial}{\partial x}+\frac{y}{\sqrt{x^2+y^2}}\frac{\partial}{\partial y}\\\frac{\partial}{\partial\theta}&=-y\frac{\partial}{\partial x}+x\frac{\partial}{\partial y}\end{aligned}$ It’s important to note that there’s really no difference between these last two steps. The first one uses the variables $r$ and $\theta$ while the second uses the variables $x$ and $y$, but they express the exact same functions, given the original substitutions above. More generally, let’s say we have a vector-valued function $g:\mathbb{R}^m\rightarrow\mathbb{R}^n$ defining a substitution $\displaystyle\begin{aligned}y^1&=g^1(x^1,\dots,x^m)\\&\vdots\\y^n&=g^n(x^1,\dots,x^m)\end{aligned}$ Cauchy’s invariant rule tells us that this gives rise to a substitution for differentials. 
$\displaystyle\begin{aligned}dy^1=dg^1(x^1,\dots,x^m)&=\frac{\partial g^1}{\partial x^1}dx^1+\dots+\frac{\partial g^1}{\partial x^m}dx^m=\frac{\partial g^1}{\partial x^i}dx^i\\&\vdots\\dy^n=dg^n(x^1,\dots,x^m)&=\frac{\partial g^n}{\partial x^1}dx^1+\dots+\frac{\partial g^n}{\partial x^m}dx^m=\frac{\partial g^n}{\partial x^i}dx^i\end{aligned}$ We can play it a little loose and write this out in matrix notation: $\displaystyle\begin{pmatrix}dy^1\\\vdots\\dy^n\end{pmatrix}=\begin{pmatrix}\frac{\partial g^1}{\partial x^1}&\dots&\frac{\partial g^1}{\partial x^m}\\\vdots&\ddots&\vdots\\\frac{\partial g^n}{\partial x^1}&\dots&\frac{\partial g^n}{\partial x^m}\end{pmatrix}\begin{pmatrix}dx^1\\\vdots\\dx^m\end{pmatrix}$ Now if we have a function $f$ in terms of the $y$ variables, we can use the substitution above to write it as a function of the $x$ variables. We can write the differential of $f$ in terms of each $\displaystyle\begin{aligned}df&=\frac{\partial f}{\partial y^j}dy^j\\df&=\frac{\partial f}{\partial x^i}dx^i\end{aligned}$ Next we use the substitutions of the differentials to rewrite the first form as $\displaystyle df=\frac{\partial f}{\partial y^j}\frac{\partial g^j}{\partial x^i}dx^i$ Then uniqueness allows us to match up the coefficients and write out the partial derivatives in terms of the $x$ variables $\displaystyle\frac{\partial f}{\partial x^i}=\frac{\partial g^j}{\partial x^i}\frac{\partial f}{\partial y^j}$ It is in this form that the chain rule is most often introduced, or the similar form $\displaystyle\frac{\partial f}{\partial x^i}=\frac{\partial y^j}{\partial x^i}\frac{\partial f}{\partial y^j}$ And now we can remove mention of $f$ from the formulæ and speak directly in terms of the operators $\displaystyle\frac{\partial}{\partial x^i}=\frac{\partial y^j}{\partial x^i}\frac{\partial}{\partial y^j}$ Again, we can play it a little loose and write this in matrix notation $\displaystyle\begin{pmatrix}\frac{\partial}{\partial x^1}\\\vdots\\\frac{\partial}{\partial x^m}\end{pmatrix}=\begin{pmatrix}\frac{\partial y^1}{\partial x^1}&\dots&\frac{\partial y^n}{\partial x^1}\\\vdots&\ddots&\vdots\\\frac{\partial y^1}{\partial x^m}&\dots&\frac{\partial y^n}{\partial x^m}\end{pmatrix}\begin{pmatrix}\frac{\partial}{\partial y^1}\\\vdots\\\frac{\partial}{\partial y^n}\end{pmatrix}$ This is very similar to the substitution for differentials written in matrix notation. The differences are that we transform from $y$-derivations to $x$-derivations instead of from $x$-differentials to $y$-differentials, and the two substitution matrices are the transposes of each other. Those who have been following closely (or who have some background in differential geometry) should start to see the importance of this latter fact, but for now we’ll consider this a statement about formulas and methods of calculation. We’ll come to the deeper geometric meaning when we come through again in a wider context. Posted by John Armstrong | Analysis, Calculus ## 4 Comments » 1. In the book version of this blog, the co-author would then fill in the History of Mathematics in Differential Operators, covering Heaviside, Hilbert, von Neumann, and a huge cast of clever Mathematicians. Comment by | October 13, 2009 | Reply 2. The book I’m thinking of wouldn’t come near this stuff. Comment by | October 13, 2009 | Reply
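Returning to the polar-coordinate formulas in the post, here is a quick symbolic sanity check (my addition; it assumes sympy is available):

```python
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)
f = x**2 * y                                    # any concrete test function

fx, fy = sp.diff(f, x), sp.diff(f, y)
polar = {x: r*sp.cos(th), y: r*sp.sin(th)}

# d/dr of f(r cos t, r sin t)  vs  cos(t) f_x + sin(t) f_y
lhs_r = sp.diff(f.subs(polar), r)
rhs_r = (sp.cos(th)*fx + sp.sin(th)*fy).subs(polar)
print(sp.simplify(lhs_r - rhs_r))               # 0

# d/dtheta  vs  -r sin(t) f_x + r cos(t) f_y
lhs_t = sp.diff(f.subs(polar), th)
rhs_t = (-r*sp.sin(th)*fx + r*sp.cos(th)*fy).subs(polar)
print(sp.simplify(lhs_t - rhs_t))               # 0
```

Both differences simplify to zero, confirming the operator identities for this test function.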
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 54, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9173496961593628, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/180169-conjugacy-classes.html
# Thread: 1. ## Conjugacy Classes Just a little confused about conjugacy class equations. Example: In the symmetric group $S_5$ how many elements are there in the conjugacy class of the element $(12)(34)$? I get that this is the number of permutations of cycle type 2, 2, 1 in $S_5$, and I know the answer's 15, I'm just not sure how to calculate this. Thanks in advance 2. I've found how this is calculated, but just not sure where they actually get these formulas from: Symmetric group:S5 - Groupprops 3. Solved it. If anyone's reading this later on, the following video explains it nicely! 4. The link you gave doesn't seem to work at the moment... Anyway, for permutations of the form (ab) you have 5 choices for a and 4 for b (as (aa) is not a transposition). However, (ab)=(ba) so you have to account for this, so you have 5·4/2=10 choices. Similarly, for permutations of the form (abc) you have 5·4·3/3=20 choices, as (abc)=(bca)=(cab), and so it goes on for the other basic cycles. Now, for (abc)(de) things start to get a bit more interesting. You have 5·4·3·2·1/(3·2)=20 choices, calculated the same way as above. Adding all these up (and remembering your identity element) you should get 10+20+30+24+20+1=105. That means you have 15 still to find. Therefore, the conjugacy class of permutations of the form (ab)(cd) has 15 elements. This is proven properly in the following way: (ab)(cd) has 5·4·3·2 basic choices, as mentioned earlier. However, you need to divide through by 2, because (ab)(cd)=(ba)(cd), and again by two because (ab)(cd)=(ab)(dc). Finally, notice that (ab)(cd)=(cd)(ab). So you need to divide through by 2 again. Thus, you have 5·4·3·2/(2·2·2)=5·3=15, as required. Does that make sense? 5. Thanks for the reply! Yeh that makes sense, it was the final dividing through by 2 that threw me! Oh and the link seems to work fine for me? Cheers again for help! 6. Yeah - it works fine for me too now! Don't know what was up - it took me to some page apologising for something and promising to right it asap. I didn't read it in detail though... 7. the way i've always counted them is like this: by insisting that i < j, we can list the transpositions in an orderly fashion, like so: (1 2), (1 3), (1 4), (1 5), (2 3), (2 4), (2 5), (3 4), (3 5), (4 5), making it clear there are 4+3+2+1 = 10 transpositions. now suppose we want to count how many (a b)(c d) there are, with (a b) and (c d) disjoint. clearly, we have 10 "free" choices for (a b). the next choice must be a transposition that has neither a nor b in it. there are 4 transpositions with "a" in it, and 4 transpositions with "b" in it, but of the transpositions with "a" in it, one of those is (a b), so we must exclude only 4+3 = 7 of the 10 transpositions to choose from. that gives us exactly 3 transpositions left that have neither a nor b in it. for example, if a = 1, b = 2, the 3 transpositions are (3 4), (3 5) and (4 5). that gives 30 possible ways to create a disjoint pair of 2-cycles (a b)(c d). since we may have also chosen (c d)(a b) this way, we must divide by 2, leaving 15 possible disjoint pairs of 2-cycles. 8.
Example: In the symmetric group $S_5$, how many elements are there in the conjugacy class of the element $(12)(34)$? I get that this is the number of different possible ways to have a combination of cycle lengths 2, 2, 1 in $S_5$, and I know the answer's 15; I'm just not sure how to calculate this. Thanks in advance

Barring the video you mentioned, it doesn't seem like anyone has mentioned this. On my blog I prove that the number of elements in the conjugacy class associated to the cycle type $1^{m_1}\cdots n^{m_n}$ in $S_n$ is $\displaystyle n!\prod_{j=1}^{n}\frac{1}{(m_j)!j^{m_j}}$, so that in your case, when you've got the cycle type $1^{1}2^{2}3^04^05^0$, the number of elements of that conjugacy class is $\displaystyle 5!\cdot\frac{1}{1!1^1 (2!)2^2}=\frac{120}{8}=15$.
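If you want to sanity-check these counts by brute force, here is a minimal Haskell sketch (my own code, not from the thread; the function names are invented for illustration). It computes the cycle type of every permutation of $\{1,\dots,5\}$ and counts those matching $(12)(34)$:

```
import Data.List (permutations, sort)

-- A permutation of [1..n] is represented by its list of images.
-- cycleType returns the sorted list of cycle lengths.
cycleType :: [Int] -> [Int]
cycleType p = sort (go [1 .. length p])
  where
    img i     = p !! (i - 1)
    cycleOf x = x : takeWhile (/= x) (tail (iterate img x))
    go []       = []
    go (x : xs) = let c = cycleOf x
                  in length c : go (filter (`notElem` c) xs)

main :: IO ()
main = do
  -- (12)(34) in S_5 swaps 1<->2 and 3<->4 and fixes 5: cycle type [1,2,2].
  let target = cycleType [2, 1, 4, 3, 5]
  print (length [p | p <- permutations [1 .. 5], cycleType p == target])
  -- prints 15, matching the formula 5!/(1! * 1^1 * 2! * 2^2)
```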
http://quant.stackexchange.com/questions/1150/what-are-some-useful-approximations-to-the-black-scholes-formula?answertab=votes
# What are some useful approximations to the Black-Scholes formula?

Let the Black-Scholes formula be defined as the function $f(S, X, T, r, v)$. I'm curious about functions that are computationally simpler than Black-Scholes but that yield results that approximate $f$ for a given set of inputs $S, X, T, r, v$.

I understand that "computationally simpler" is not well-defined. But I mean simpler in terms of the number of terms used in the function. Or, even more specifically, the number of distinct computational steps that need to be completed to arrive at the Black-Scholes output. Obviously Black-Scholes is computationally simple as it is, but I'm ready to trade some accuracy for an even simpler function that would give results that approximate B&S.

Do any such simpler approximations exist?

-

## 6 Answers

This is just to expand a bit on vonjd's answer. The approximate formula mentioned by vonjd is due to Brenner and Subrahmanyam ("A simple solution to compute the Implied Standard Deviation", Financial Analysts Journal (1988), pp. 80-83). I do not have a free link to the paper, so let me just give a quick and dirty derivation here.

For the at-the-money(-forward) call option, we have $S=Ke^{-r(T-t)}$. Plugging this into the standard Black-Scholes formula $$C(S,t)=N(d_1)S-N(d_2)Ke^{-r(T-t)},$$ we get that $$C(S,t)=\left[N\left(\frac{1}{2}\sigma\sqrt{T-t}\right)-N\left(-\frac{1}{2}\sigma\sqrt{T-t}\right)\right]S.\qquad\qquad(1)$$ Now, Taylor's formula implies for small $x$ that $$N(x)=N(0)+N'(0)x+N''(0)\frac{x^2}{2}+O(x^3).\qquad\qquad\qquad\qquad(2)$$ Combining (1) and (2), we get with some obvious cancellations that $$C(S,t)=S\left(N'(0)\sigma\sqrt{T-t}+O(\sigma^3\sqrt{(T-t)^3})\right).$$ But $$N'(0)=\frac{1}{\sqrt{2\pi}}=0.39894228...$$ so finally we have, for small $\sigma\sqrt{T-t}$, that $$C(S,t)\approx 0.4S\sigma\sqrt{T-t}.$$ The modified formula $$C(S,t)\approx 0.4Se^{-r(T-t)}\sigma\sqrt{T-t}$$ gives a slightly better approximation.

-

Good answer. I wonder if there are any approximations for options that are not at the money? My simple approach would be to assume a delta of 50% for the ATM call option and imply a price for the non-ATM option as Option = ATM Price + 0.5*(Strike - Forward). Anyone got anything better? – Robert Feb 7 '12 at 12:08

This one is the best approximation I have ever seen:

If you hate computers and computer languages, don't give up: there's still hope! What about doing Black-Scholes in your head instead? If the option is about at-the-money-forward and it is a short time to maturity, then you can use the following approximation:

call = put = StockPrice * 0.4 * volatility * Sqrt( Time )

Source: http://www.espenhaug.com/black_scholes.html

-

In addition to what vonjd already posted, I would recommend you look at E.G. Haug's article - The Options Genius. Wilmott.com. You can find some approximations of BS not only for the vanilla European call and put but even for some exotics. For example:

• chooser option: call = put = $0.4F_{0} e^{-\mu T}\sigma(\sqrt{T}-\sqrt{t})$
• asian option: call = put = $0.23F_{0} e^{-\mu T}\sigma(\sqrt{T}+2\sqrt{t})$
• floating strike lookback call = $0.8F_{0} e^{-\mu T}\sigma\sqrt{T} - 0.25{\sigma^2}T$
• floating strike lookback put = $0.8F_{0} e^{-\mu T}\sigma\sqrt{T} + 0.25{\sigma^2}T$
• forward spread: call = put = $0.4F_{1} e^{-\mu T}\sigma\sqrt{T}$

-

The Black-Scholes 'normal-vol' formula leads quickly to a similar approximation to the one described by olaker.
Click here for a paper which contains a formal derivation of the call and put prices based on a normal model (i.e. a Brownian motion rather than a geometric Brownian motion). The formula for the call price is: $$\text{Call} = (F-K)N(d_1) + \frac{\sigma \sqrt{T-t}}{\sqrt{2 \pi}} e^{-\frac{1}{2} d_1^2},$$ where $$d_1 = \frac{F-K}{\sigma\sqrt{T-t}}.$$ BTW, I work in fixed income, so I always tend to write a version that is suitable for swaptions.

## Options which are ATM

You can see that for $F=K$ this becomes $\text{ATMCall} = \frac{\sigma \sqrt{T-t}}{\sqrt{2 \pi}} \approx 0.4 \, \sigma \, \sqrt{T-t}$.

## Options which are not ATM

I have recently discovered a generalization of this formula which works very well for strikes that are not at the money too. See my blog for a longer discussion; here are the main points.

The standard decomposition for an option is: $$\text{Option value} = \text{Intrinsic value} + \text{Time value}.$$

In the Black-Scholes normal formula above, if you investigate the term $(F-K)N(d_1)$ in a spreadsheet, you'll see that for small levels of volatility and maturity (try, for example, $\sigma=0.0025$, Maturity=1) it is actually quite close to $\max(0,F-K)$, which is the intrinsic value of the call. Consequently, the BS normal formula is almost: $$\text{Call Price} = \text{Intrinsic Value} + \text{ATMPrice} \times e^{-\frac{1}{2} d_1^2},$$ where $$\text{ATMPrice} = \frac{\sigma \sqrt{T-t}}{\sqrt{2 \pi}}.$$

However, if you compare this approximation to the true BS formula in a spreadsheet, you'll see that around the strike (especially for larger values of $\sigma$) it gives too much value to the call: basically the term $\text{ATMPrice} \times e^{-\frac{1}{2}d_1^2}$ is too big when $d_1$ is non-zero and small. This is telling us that the difference between $(F-K)N(d_1)$ and $\max(0,F-K)$ becomes important near the strike. As I say, have a look in a spreadsheet.

Nonetheless, this simple-but-wrong formula for the Call Price points us in the right direction: it shows that the time value of the option should be written in terms of the price of the ATM option. Here is my solution. I call it The Hardy Decomposition: $$\text{Call Price} = \text{Intrinsic Value} + \text{ATMPrice}\times\text{HardyFactor}$$ where $$\text{HardyFactor} = e^{-\frac{1}{2} d_1^2} + \frac{d_1}{0.4}N(d_1) - \max\left(\frac{d_1}{0.4},0\right).$$

So far this is just a rearrangement of the original Black-Scholes 'normal-vol' formula. The key result is that the $\text{HardyFactor}$ is well approximated by a simple expression: $$\text{HardyFactor} \approx (1- 0.41 |d_1|) e^{-|d_1|}$$ So you can actually use the following as a pretty good approximation for call prices: $$\text{Call Price} = \text{Intrinsic Value} + \text{ATMPrice}\times (1- 0.41 |d_1|) e^{-|d_1|}.$$ A similar result holds for put options. You can use this Hardy Decomposition to calculate option prices in your head; you only need to remember a few values:

-

As other people have said, you need to approximate the cumulative. The problem is that wherever you look, you will find that to approximate it you will need to use exponentials or trigonometric functions, which are also expensive. What you can do is to build yourself a cubic spline with pre-cached values for the cumulative and calculate the value at other points x by (cubic) interpolation. That will make it much faster. Possibly you will also call this method with a series of ordered values, so that you can avoid doing a binary search to locate the interval.
You will have a cached index for the last interval located and look around that one.

-

The only thing that you can approximate in it is the Cumulative Normal Distribution function (to save some time with the numerics). Anyway, it is as close to a "one-liner" as possible; why would you need to simplify it?

-

I'm aware of the fact that Black-Scholes is a simple "one-liner". I requested approximations to Black-Scholes that are even simpler. I'm ready to give up some accuracy in return for an even simpler function. – knorv May 10 '11 at 16:00
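To make a couple of the approximations above concrete, here is a self-contained Haskell sketch (my own code, not from any answer; it uses the classical Zelen-Severo polynomial approximation to the normal CDF, and the test numbers are invented). It compares the full Black-Scholes price of an at-the-money-forward call with the $0.4\,S\,\sigma\sqrt{T}$ rule, and also implements the 'Hardy decomposition' approximation for a normal-model call:

```
-- Standard normal CDF via the Zelen-Severo polynomial (Abramowitz &
-- Stegun 26.2.17); absolute error is about 7.5e-8, plenty for this test.
normCdf :: Double -> Double
normCdf x
  | x < 0     = 1 - normCdf (-x)
  | otherwise = 1 - pdf * poly
  where
    pdf  = exp (-x * x / 2) / sqrt (2 * pi)
    t    = 1 / (1 + 0.2316419 * x)
    poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
             + t * (-1.821255978 + t * 1.330274429))))

-- Full Black-Scholes call price (lognormal model).
bsCall :: Double -> Double -> Double -> Double -> Double -> Double
bsCall s k t r v = s * normCdf d1 - k * exp (-r * t) * normCdf d2
  where
    d1 = (log (s / k) + (r + v * v / 2) * t) / (v * sqrt t)
    d2 = d1 - v * sqrt t

-- Brenner-Subrahmanyam rule for an at-the-money-forward option.
atmApprox :: Double -> Double -> Double -> Double
atmApprox s v t = 0.4 * s * v * sqrt t

-- 'Hardy decomposition' approximation for a call in the normal model,
-- where sigma is an absolute (basis-point style) volatility.
hardyCall :: Double -> Double -> Double -> Double -> Double
hardyCall f k sigma t = max 0 (f - k) + atm * factor
  where
    atm    = sigma * sqrt t / sqrt (2 * pi)
    d1     = (f - k) / (sigma * sqrt t)
    factor = (1 - 0.41 * abs d1) * exp (negate (abs d1))

main :: IO ()
main = do
  let s = 100
      r = 0.02
      v = 0.2
      t = 0.5
      k = s * exp (r * t)  -- strike at the forward, so the call is ATMF
  print (bsCall s k t r v)            -- roughly 5.64
  print (atmApprox s v t)             -- roughly 5.66
  print (hardyCall 0.03 0.032 0.01 1) -- roughly 0.0030, vs ~0.0031 exact
```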
http://mathhelpforum.com/differential-geometry/51994-regular-surface.html
# Thread:

1. ## regular surface

Show that the cylinder {$(x,y,z)$ in $R^3; x^2+y^2=1$} is a regular surface, and find parametrizations whose coordinate neighborhoods cover it. Please help.

2. I won't do it for you, but: parametrize it using cylindrical coordinates into some parametrization S. To show it is regular you can
- show that the first partial derivatives of S are independent, or
- show that the cross product of the first partial derivatives of S is nonzero.
There are many other ways to show it is regular.
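For instance, one standard choice of parametrization (not spelled out in the thread, so treat it as just one possible answer) is

$$\mathbf{x}(u,v) = (\cos u,\ \sin u,\ v), \qquad (u,v)\in(0,2\pi)\times\mathbb{R},$$

with partial derivatives and cross product

$$\mathbf{x}_u = (-\sin u,\ \cos u,\ 0),\quad \mathbf{x}_v = (0,0,1),\quad \mathbf{x}_u \times \mathbf{x}_v = (\cos u,\ \sin u,\ 0) \neq \mathbf{0},$$

so the patch is regular everywhere; two such patches, with $u$ ranging over overlapping intervals such as $(0,2\pi)$ and $(-\pi,\pi)$, cover the whole cylinder.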
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_3&diff=30660&oldid=30630
# User:Michiexile/MATH198/Lecture 3

### From HaskellWiki

Current revision (01:21, 8 October 2009). These notes cover material dispersed in several places of Awodey. The definition of a functor is on page 8. More on functors and natural transformations comes in sections 7.1-7.2, 7.4-7.5, 7.7-7.10.

### 1 Functors

We've spent quite a bit of time talking about categories, and special entities in them - morphisms and objects, and special kinds of them, and properties we can find. And one of the main messages visible so far is that as soon as we have an algebraic structure, and homomorphisms, this forms a category. More importantly, many algebraic structures, and algebraic theories, can be captured by studying the structure of the category they form. So obviously, in order to understand Category Theory, one key will be to understand homomorphisms between categories.
#### 1.1 Homomorphisms of categories

A category is a graph, so a homomorphism of a category should be a homomorphism of a graph that respects the extra structure. Thus, we are led to the definition:

Definition A functor $F:C\to D$ from a category $C$ to a category $D$ is a graph homomorphism $F_0, F_1$ between the underlying graphs such that
• $F_1(1_X) = 1_{F_0(X)}$ for every object $X\in C_0$, and
• $F_1(gf) = F_1(g)F_1(f)$ for all composable morphisms $f, g$.

Note: We shall frequently use $F$ in place of $F_0$ and $F_1$. The context should suffice to tell you whether you are mapping an object or a morphism at any given moment.

On Wikipedia: [1]

##### 1.1.1 Examples

A homomorphism $f:M\to N$ of monoids is a functor of the corresponding one-object categories $F:C(M)\to C(N)$. The functor takes the single object to the single object, and acts on morphisms by $F(g) = f(g)$.

A homomorphism $f:P\to Q$ of posets is a functor $F:C(P)\to C(Q)$ of the corresponding categories. We have $f(x)\leq f(y)$ if $x\leq y$, so if $g\in Hom_P(x,y)$ then $F(g)\in Hom_Q(f(x),f(y))$.

If we pick some basis for every vector space, then this gives us a functor $F$ from Vect to the category with objects integers and morphisms matrices by:
• $F_0(V) = \dim V$
• $F_1(f)$ is the matrix representing $f$ in the bases chosen.
This example relies on the axiom of choice.

#### 1.2 Interpreting functors in Haskell

One example of particular interest to us is the category Hask. A functor in Hask is something that takes a type, and returns a new type. Not only that, we also require that it takes arrows and returns new arrows. So let's pick all this apart for a minute or two.

Taking a type and returning a type means that you are really building a polymorphic type class: you have a family of types parametrized by some type variable. For each type a , the functor data F a = ... will produce a new type, F a . This, really, is all we need to reflect the action of $F_0$.

The action of $F_1$ in turn is recovered by requiring the parametrized type F a to implement the Functor typeclass. This typeclass requires you to implement a function fmap :: (a -> b) -> F a -> F b . This function, as the signature indicates, takes a function f :: a -> b and returns a new function fmap f :: F a -> F b .

The rules we expect a Functor to obey seem obvious: translating from the categorical intuition we arrive at the rules
• fmap id = id and
• fmap (g . f) = fmap g . fmap f

Now, the real power of a Functor still isn't obvious with this viewpoint. The real power comes in approaching it less categorically. A Haskell functor is a polymorphic type. In a way, it is a prototypical polymorphic type. We have some type, and we change it, in a meaningful way. And the existence of the Functor typeclass demands of us that we find a way to translate function applications into the Functor image. We can certainly define a boring Functor, such as

```
data Boring a = Boring
instance Functor Boring where
  fmap f = const Boring
```

but this is not particularly useful. Almost all Functor instances will take your type and include it into something different, something useful. And it does this in a way that allows you to lift functions acting on the type it contains, so that they transform them in their container. And the choice of words here is deliberate. Functors can be thought of as data containers, their parameters declaring what they contain, and the fmap implementation allowing access to the contents. Lists, trees with node values, trees with leaf values, Maybe , Either all are Functor s in obvious manners.
```
data List a = Nil | Cons a (List a)
instance Functor List where
  fmap f Nil          = Nil
  fmap f (Cons x lst) = Cons (f x) (fmap f lst)

data Maybe a = Nothing | Just a
instance Functor Maybe where
  fmap f Nothing  = Nothing
  fmap f (Just x) = Just (f x)

data Either b a = Left b | Right a
instance Functor (Either b) where
  fmap f (Left x)  = Left x
  fmap f (Right y) = Right (f y)

data LeafTree a = Leaf a | Node [LeafTree a]
instance Functor LeafTree where
  fmap f (Node subtrees) = Node (map (fmap f) subtrees)
  fmap f (Leaf x)        = Leaf (f x)

data NodeTree a = Leaf | Node a [NodeTree a]
instance Functor NodeTree where
  fmap f Leaf              = Leaf
  fmap f (Node x subtrees) = Node (f x) (map (fmap f) subtrees)
```

### 2 The category of categories

We define a category Cat by setting objects to be all small categories, and arrows to be all functors between them. Being graph homomorphisms, functors compose, and their composition fulfills all requirements on forming a category.

It is sometimes useful to argue about a category CAT of all small and most large categories. The issue here is that allowing $CAT\in CAT_0$ opens up for set-theoretical paradoxes.

#### 2.1 Isomorphisms in Cat and equivalences of categories

The definition of an isomorphism holds as is in Cat. However, isomorphisms of categories are too restrictive a concept. To see this, recall the category Monoid, where each object is a monoid, and each arrow is a monoid homomorphism. We can form a one-object category out of each monoid, and the method to do this is functorial - i.e. does the right thing to arrows to make the whole process a functor.

Specifically, if $h:M\to N$ is a monoid homomorphism, we create a new functor $C(h):C(M)\to C(N)$ by setting $C(h)_0(*) = *$ and $C(h)_1(m) = h(m)$. This creates a functor from Monoid to Cat. The domain can be further restricted to a full subcategory OOC of Cat, consisting of all the 1-object categories. We can also define a functor $U:OOC\to Monoid$ by $U(C) = C_1$, with the monoidal structure on $U(C)$ given by the composition in $C$. For an arrow $F:A\to B$ we define $U(F) = F_1$.

These functors take a monoid and build a one-object category, hitting all of them; and take a one-object category and build a monoid. Both functors respect the monoidal structures - yet these are not an isomorphism pair. The crux here is that our construction of $C(M)$ from $M$ requires us to choose something for the one object of the category. And choosing different objects gives us different categories. Thus, the composition $CU$ is not the identity; there is no guarantee that we will pick the object we started with in the construction in $C$.

Nevertheless, we would be inclined to regard the categories Monoid and OOC as essentially the same. The solution is to introduce a different kind of sameness:

Definition A functor $F:C\to D$ is an equivalence of categories if there is a functor $G:D\to C$ and:

• A family $u_C:C\to G(F(C))$ of isomorphisms in $C$ indexed by the objects of $C$, such that for every arrow $f:C\to C'$: $G(F(f)) = u_{C'}\circ f\circ u_C^{-1}$.
• A family $u_D:D\to F(G(D))$ of isomorphisms in $D$ indexed by the objects of $D$, such that for every arrow $f:D\to D'$: $F(G(f)) = u_{D'}\circ f\circ u_D^{-1}$.

The functor $G$ in the definition is called a pseudo-inverse of $F$.

#### 2.2 Natural transformations

The families of morphisms required in the definition of an equivalence show up in more places. Suppose we have two functors $F:A\to B$ and $G:A\to B$.
Definition A natural transformation $\alpha:F\to G$ is a family of arrows $\alpha_a:F(a)\to G(a)$ indexed by the objects of $A$, such that for any arrow $s:a\to b$ in $A$ we have $G(s)\circ\alpha_a = \alpha_b\circ F(s)$ (draw diagram).

The commutativity of the corresponding diagram is called the naturality condition on $\alpha$, and the arrow $\alpha_a$ is called the component of the natural transformation $\alpha$ at the object $a$.

Given two natural transformations $\alpha: F\to G$ and $\beta: G\to H$, we can define a composition $\beta\circ\alpha$ componentwise as $(\beta\circ\alpha)_a = \beta_a \circ \alpha_a$.

Proposition The composite of two natural transformations is also a natural transformation.

Proposition Given two categories $C,D$, the collection of all functors $C\to D$ forms a category Func(C,D) with objects functors and morphisms natural transformations between these functors.

Note that this allows us to a large degree to use functors to define entities we may otherwise have defined using large and involved definitions. Doing this using the categorical language instead mainly gives us a large number of facts for free: we don't have to verify, say, associativity of composition of functors if we already know them to be functors.

Example Recall our original definition of a graph as two collections and two maps between them. We can define a category GraphS:

$A \rightrightarrows B$

with the two arrows named $s$ and $t$. It is a finite category with 2 objects, and 4 arrows. Now, a small graph can be defined to be just a functor $GraphS \to Set$.

In order to define more intricate structures this way, say categories, or algebraic structures, we'd need more tools - which we shall find in later lectures. This approach to algebraic definition develops into an area called Sketch theory. The idea, there, is that theories are modelled by specific categories - such as $GraphS$ above, and actual instances of the objects they model appear as functors.

With this definition, since a graph is just a functor $GraphS \to Set$, we get graph homomorphisms for free: a graph homomorphism is just a natural transformation. And anything we can prove about functors and natural transformations thus immediately gives a corresponding result for graphs and graph homomorphisms.

On Wikipedia, see: [2]. Sketch theory, alas, has a painfully incomplete Wikipedia article.

### 3 Properties of functors

The process of forming homsets within a category $C$ gives, for any object $A$, two different functors: $Hom(A,-): X\mapsto Hom(A,X)$ and $Hom(-,A): X\mapsto Hom(X,A)$. Functoriality for $Hom(A,-)$ is easy: $Hom(A,f)$ is the map that takes some $g:A\to X$ and transforms it into $fg:A\to Y$.

Functoriality for $Hom(-,A)$ is more involved. We can view this as a functor either from $C^{op}$, or as a different kind of functor. If we just work with $C^{op}$, then no additional definitions are needed - but we need an intuition for the dual categories.

Alternatively, we introduce a new concept of a contravariant functor. A contravariant functor $F:C\to D$ is some map of categories, just like a functor is, but such that $F(1_X) = 1_{F(X)}$, as usual, and such that for an arrow $f:A\to B$, the functor image is some $F(f):F(B)\to F(A)$, and the composition is $F(gf) = F(f)F(g)$. The usual kind of functors are named covariant.

A functor $F:C\to D$ is faithful if the induced mapping $Hom_C(A,B) \to Hom_D(FA, FB)$ is injective for all $A,B\in C_0$. A functor $F:C\to D$ is full if the induced mapping $Hom_C(A,B) \to Hom_D(FA, FB)$ is surjective for all $A,B\in C_0$.
Note that a full subcategory is a subcategory such that the embedding functor is both full and faithful. See also entries in the list of Types of Functors on the Wikipedia page for Functors [3].

#### 3.1 Preservation and reflection

We say that a functor $F:C\to D$ preserves a property P if whenever P holds in the category $C$, it does so for the image of $F$ in $D$. Thus, the inclusion functor for the category of finite sets into the category of sets preserves both monomorphisms and epimorphisms. Indeed, these properties, in both categories, correspond to injective and surjective functions, respectively; and a surjective (injective) function of finite sets is still surjective (injective) when considered for sets in general.

As another example, consider the category 2 given by $A\to B$, with the single non-identity arrow named $f$. All arrows in this category are both monomorphic and epimorphic. We can define a functor $F:2\to Set$ through $A\mapsto \{1,2\}$, $B\mapsto \{3,4\}$ and $f$ mapping to the set function that takes all elements to the value 3. The resulting constant map $F(f)$ is neither epic nor monic, while the morphism $f$ is both.

However, there are properties that functors do preserve:

Proposition Every functor preserves isomorphisms.

Proof Suppose $f:X\to Y$ is an isomorphism with inverse $f^{-1}$. Then $F(f)$ has inverse $F(f^{-1})$. Indeed, $F(f)F(f^{-1}) = F(ff^{-1}) = F(1_Y) = 1_{F(Y)}$ and $F(f^{-1})F(f) = F(f^{-1}f) = F(1_X) = 1_{F(X)}$. QED

We say that a functor $F:C\to D$ reflects a property P if whenever $F(f)$ has that property, so does $f$.

A functor $F:C\to D$ is representative if every object in $D$ is isomorphic to some $F(X)$ for $X\in C_0$.

### 4 Homework

Complete homework consists of 4 out of 9 complete solutions. Partial credit will be given.

1. Show that the category of vector spaces is equivalent to the category with objects integers and arrows matrices.
2. Prove the propositions in the section on natural transformations.
3. Prove that listToMaybe :: [a] -> Maybe a is a natural transformation from the list functor to the maybe functor. Is catMaybes a natural transformation? Between which functors?
4. Find two more natural transformations defined in the standard Haskell library. Find one polymorphic function that is not a natural transformation.
5. Write a functor instance for data F a = F (Int -> a)
6. Write a functor instance for data F a = F ((Int -> a) -> Int)
7. Write a natural transformation from Maybe a to Either () a . Is this a natural isomorphism? If so, what is its inverse? If not, why not?
8. Write a natural transformation from [a] to (Maybe a, Maybe a) . Is this a natural isomorphism? If so, what is its inverse? If not, why not?
9. * Recall that a category is called discrete if it has no arrows other than the identities. Show that a small category A is discrete if and only if every set function $A_0\to B_0$, for every small category B, is the object part of a unique functor $A\to B$. Analogously, we define a small category B to be indiscrete if for every small category A, every set function $A_0\to B_0$ is the object part of a unique functor $A\to B$. Characterise indiscrete categories by the objects and arrows they have.
10.
* We could write a pretty printer, or XML library, using the following data type as the core data type:

```
data ML a = Tag a (ML a)
          | Str String
          | Seq [ML a]
          | Nil
```

With this, we can use a specific string-generating function to generate the tagged marked-up text, such as, for instance:

```
prettyprint (Tag tag ml)  = "<" ++ show tag ++ ">" ++ prettyprint ml ++
                            "</" ++ show tag ++ ">"
prettyprint (Str s)       = s
prettyprint (Seq [])      = ""
prettyprint (Seq (m:ms))  = prettyprint m ++ "\n" ++ prettyprint (Seq ms)
prettyprint Nil           = ""
```

Write an instance of Functor that allows us to apply changes to the tagging type. Then, using the following tagging types:

```
data HTMLTag = HTML | BODY | P | H1 | CLASS String deriving (Show)
data XMLTag = DOCUMENT | HEADING | TEXT deriving (Show)
```

write a function htmlize :: ML XMLTag -> ML HTMLTag and use it to generate an HTML document out of:

```
Tag DOCUMENT (Seq [
  Tag HEADING (Str "Nobel prize for chromosome find"),
  Tag TEXT (Str "STOCKHOLM (Reuters) - Three Americans won the Nobel prize for medicine on Monday for revealing the existence and nature of telomerase, an enzyme which helps prevent the fraying of chromosomes that underlies aging and cancer."),
  Tag TEXT (Str "Australian-born Elizabeth Blackburn, British-born Jack Szostak and Carol Greider won the prize of 10 million Swedish crowns ($1.42 million), Sweden's Karolinska Institute said."),
  Tag TEXT (Str "'The discoveries ... have added a new dimension to our understanding of the cell, shed light on disease mechanisms, and stimulated the development of potential new therapies,' it said."),
  Tag TEXT (Str "The trio's work laid the foundation for studies that have linked telomerase and telomeres -- the small caps on the end of chromosomes -- to cancer and age-related conditions."),
  Tag TEXT (Str "Work on the enzyme has become a hot area of drug research, particularly in cancer, as it is thought to play a key role in allowing tumor cells to reproduce out of control."),
  Tag TEXT (Str "One example, a so-called therapeutic vaccine that targets telomerase, in trials since last year by drug and biotech firms Merck and Geron, could yield a treatment for patients with tumors including lung and prostate cancer."),
  Tag TEXT (Str "The Chief Executive of Britain's Medical Research Council said the discovery of telomerase had spawned research of 'huge importance' to the world of science and medicine."),
  Tag TEXT (Str "'Their research on chromosomes helped lay the foundations of future work on cancer, stem cells and even human aging, areas that continue to be of huge importance,' Sir Leszek Borysiewicz said in a statement.")
  ])
```
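As a concrete Haskell illustration of the naturality condition from section 2.2 (this example is mine, not part of the original notes): reverse is a natural transformation from the list functor to itself, and naturality is exactly the statement that it commutes with fmap.

```
-- reverse :: [a] -> [a] is a natural transformation from the list
-- functor to itself; the naturality square is the law
--   fmap f . reverse == reverse . fmap f
main :: IO ()
main = print ((fmap (+ 1) . reverse) xs == (reverse . fmap (+ 1)) xs)
  where
    xs = [1, 2, 3] :: [Int]   -- prints True
```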
http://mathhelpforum.com/differential-geometry/171569-proving-convergence-bn-given.html
# Thread:

1. ## Proving convergence of {an/bn} given...

Theorem: If {an} is bounded and {bn} tends to infinity with bn not equal to 0 for all n, then {an/bn} converges to 0.

I am having trouble understanding why this theorem is true. Can somebody please provide me with a proof?

2. Originally Posted by gwiz
Theorem: If {an} is bounded and {bn} tends to infinity with bn not equal to 0 for all n, then {an/bn} converges to 0. I am having trouble understanding why this theorem is true. Can somebody please provide me with a proof?

$-\dfrac{K}{|b_n|}\le \dfrac{a_n}{b_n} \le \dfrac{K}{|b_n|}$ where $K \ge |a_n|$ for all $n \in \mathbb{N}$

CB
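Spelled out, CB's hint finishes with the squeeze theorem, or directly with an $\varepsilon$-$N$ argument along the following lines (one possible phrasing, not from the thread):

$$\text{Given } \varepsilon>0,\ \text{since } |b_n|\to\infty \text{ choose } N \text{ with } |b_n| > \frac{K}{\varepsilon} \text{ for all } n\ge N; \text{ then } \left|\frac{a_n}{b_n}\right| \le \frac{K}{|b_n|} < \varepsilon \text{ for all } n \ge N.$$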
http://physics.stackexchange.com/questions/53213/why-wouldnt-hubbles-law-be-directly-in-units-of-frequency
# Why wouldn't Hubble's Law be directly in units of frequency?

Maybe using this as an example: the energy of a particular color of yellow light is $3.44 \times 10^{-22}$ $kJ$. So if I want to find the frequency of that light, I take that number, divide by $h$, and I get $5.19 \times 10^{14}$ (or something like that), which is in what units? Hz?

What if I wanted to know the frequency of the light after travelling, say, 1 billion light years? Is there a version of Hubble's Law (and parameter, I'd assume) in the form of $f_\text{new} = f_\text{original} \times H_u$, where $H_u$ is a version of Hubble's Parameter in units appropriate for the units of frequency?

-

"The energy of a particular color of yellow light is 3.44E-22 kJ" ::facepalm:: OK. So that's a little badly stated. Presumably the correspondent meant that the energy of a single photon was such and so. In any case we can't answer your questions (what units?) until you tell us the units of the value of $h$ you used. And once you've done that the answer should be clear, no? – dmckee♦ Feb 6 at 16:03

## 2 Answers

Really the quantity you are seeking is redshift, not the Hubble parameter. Redshift $z$ is defined by $$f_\text{observed} = \frac{f_\text{emitted}}{1+z}.$$ Now, if you want to relate redshift to proper distance $D$, you can use the Hubble relation $$H_0 D = cz,$$ which means $$f_\text{observed} = \frac{f_\text{emitted}}{1+H_0D/c}.$$ For distances large compared to $c/H_0$, this becomes $$f_\text{observed} \approx \frac{c}{H_0D} f_\text{emitted}.$$

-

@mikethematrix To come to my point of view re your question, let me first make sure I interpret your question correctly. Hubble's law for the expansion of the universe states that the galaxies are receding from one another at speeds proportional to the distance $d$ between them. Then, for two galaxies at distance $d$ we have $v=Hd.$ Clearly the 'constant' $H$ has units $s^{-1}$. Are you asking why we could not put $H$ in units of $Hz$ instead of having it in $s^{-1}$, probably motivated by the equation $v=\omega R$ from circular motion and perhaps inspired by Planck's law $E=hf$? If this is what you mean, then you need an answer that is different from that given by Chris. In my opinion, the answer in this case is: no, you cannot put $H$ in $Hz$. The reason is that Hubble's law does not describe a periodic phenomenon like circular motion, so the 'constant' $H$ is far more fundamental than the frequency of a periodic phenomenon. At any time $t$ it relates to the age of the universe: the Hubble time $\frac{1}{H}$ is of the order of the age of the universe. If this is not how you meant your question, then Chris's answer is a nice one, whose vote I have increased by 1.

-
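To put numbers on the original question, here is a rough Haskell sketch (my own; it assumes $H_0 \approx 70\ \mathrm{km/s/Mpc}$ and the linear Hubble law, so it is only sensible for $z \ll 1$, and the helper names are invented):

```
-- Rough linear-Hubble-law redshift of a photon after d billion light
-- years of travel; all constants below are my assumed values.
h0 :: Double
h0 = 70            -- Hubble constant in km/s/Mpc (assumed)

c :: Double
c = 299792.458     -- speed of light in km/s

mpcPerGly :: Double
mpcPerGly = 306.6  -- megaparsecs in one billion light years (approx.)

-- Observed frequency of light emitted at fEmit after travelling d Gly.
observed :: Double -> Double -> Double
observed fEmit d = fEmit / (1 + z)
  where z = h0 * d * mpcPerGly / c   -- z = H0 * D / c

main :: IO ()
main = print (observed 5.19e14 1)
-- roughly 4.84e14 Hz for the yellow light in the question
```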
http://mathoverflow.net/revisions/64245/list
## Return to Answer

Edit: In your extended question, you mention that the usual argument only guarantees that the power set $2^A$ has "one more" element than the original set $A$: given a map $\phi : A \to 2^A$, there is at least one element not in the image of $\phi$. Call that element $x$. Certainly there is a bijection between $A$ and $A \cup \{x\}$, so repeating your argument shows there isn't a bijection between $A \cup \{x\}$ and $2^A$; at least one element is missed. So in fact $2^A$ has "two more" elements than $A$. Repeating this, one sees there are infinitely many elements missed. Moreover, by showing that $A \times A$ cannot be mapped onto $2^A$, there are at least "$|A|$ more" elements in $2^A$, and so on. This is what I was trying to get at in my first paragraph. Of course, it has to be understood in the context that adding one element to an infinite set doesn't actually make it any larger in the sense of cardinality. To my mind, cardinality is a very crude measure of the "size" of an infinite set, and so any operation that enlarges it to an extent that it can be detected by cardinality must be a very dramatic enlargement indeed.

For infinite sets, $A$ having larger cardinality than $B$ already implies that $A$ is "much larger," and perhaps a good way to see this is to think of all the ways of making $B$ "larger" which don't increase its cardinality. E.g. taking the union with any set of equal or smaller cardinality, taking the Cartesian product with a set of equal or smaller cardinality, etc.

Perhaps you want to know if there are a lot of other cardinalities in between those of $B$ and $2^B$. This question can't be answered with the usual axioms of set theory. For more information, read about the generalized continuum hypothesis.
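For reference, the "usual argument" alluded to above is Cantor's diagonal construction, where the missed element of $2^A$ can be written down explicitly:

$$D = \{\, a \in A : a \notin \phi(a) \,\} \in 2^A, \qquad D = \phi(a_0) \text{ would give } a_0 \in D \iff a_0 \notin \phi(a_0) = D,$$

a contradiction, so $D$ is not in the image of $\phi$.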
http://mathoverflow.net/questions/101169?sort=votes
## Not especially famous, long-open problems which higher mathematics beginners can understand

This is a pair to http://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand

So this time I'm asking for open questions so easy to state for students of subjects such as undergraduate abstract algebra, linear algebra, real analysis, topology (etc.?) that a teacher could mention the questions almost immediately after stating very basic definitions. If they weren't either too famous or already solved, appropriate answers might include the Koethe Nil Conjecture, the existence of odd order finite simple groups or the continuum hypothesis. Thanks

-

2 In other words, delete now and re-ask in a month? I agree that this is the best strategy for David, if he wants the best answers (people are kind of exhausted of this sort of question now), but it seems strange to close it against his will. – Steven Gubkin Jul 2 at 19:47

>...some of the answers to alvarezpaiva's question would fit here (eg, normal numbers)

Re normal numbers, I think not. Since one does not usually emphasize radix expansions in basic abstract algebra or real analysis, and certainly one doesn't first meet such expansions at university, that question would not obviously have pedagogical value in the specified context. I mentioned Koethe's conjecture because as soon as the student has the definition of ring, ideal and left ideal, a teacher can have this open problem ready to "trouble" those notions for the quickest students. – David Feldman Jul 2 at 20:39

Thanks for the advice, but for now I'll take my chances. Moderation is good for the site, but censorship isn't. So I think it would set a bad precedent now for me to fold pre-emptively. – David Feldman Jul 2 at 20:41

3 Started META: meta.mathoverflow.net/discussion/1540/… – Alexander Chervov Feb 22 at 7:36

I deleted several of my comments dating from last July as they meanwhile completely lack context. (The 'conversation' gets sort of destroyed, but this feels not so necessary, too. If you really care there is "back-up" on meta.) – quid Feb 22 at 23:35

## 4 Answers

The Hot Spot Conjecture

The conjecture is quite amazing and simple to formulate; it can be understood even by people "from the street", and it seems its prediction can be tested experimentally. It is the subject of Polymath project 7. Let me quote:

The hotspots conjecture can be expressed in simple English as: Suppose a flat piece of metal, represented by a two-dimensional bounded connected domain, is given an initial heat distribution which then flows throughout the metal. Assuming the metal is insulated (i.e. no heat escapes from the piece of metal), then given enough time, the hottest point on the metal will lie on its boundary.

In mathematical terms, we consider a two-dimensional bounded connected domain D and let u(x,t) (the heat at point x at time t) satisfy the heat equation with Neumann boundary conditions. We then conjecture that

For sufficiently large t > 0, u(x,t) achieves its maximum on the boundary of D.

This conjecture has been proven for some domains and proven to be false for others. In particular it has been proven to be true for obtuse and right triangles, but the case of an acute triangle remains open. The proposal is that we prove the Hot Spots conjecture for acute triangles!
Note: strictly speaking, the conjecture is only believed to hold for generic solutions to the heat equation. As such, the conjecture is then equivalent to the assertion that the generic eigenvectors of the second eigenvalue of the Laplacian attain their maximum on the boundary. A stronger version of the conjecture asserts that

For all non-equilateral acute triangles, the second Neumann eigenvalue is simple; and

The second Neumann eigenfunction attains its extrema only at the boundary of the triangle.

(In fact, it appears numerically that for acute triangles, the second eigenfunction only attains its maximum on the vertices of the longest side.)

==========================================================

Maybe this problem can be mentioned when teaching determinants, and in particular: $\det(AB)= \det(A)\det(B).$

There are so-called Capelli identities which generalize this formula for specific matrices with non-commutative entries. In the paper Noncommutative determinants, Cauchy-Binet formulae, and Capelli-type identities by Sergio Caracciolo, Andrea Sportiello, Alan D. Sokal, they formulate certain conjectures of the type $$\det(A)\det(B)=\det(AB+\text{correction})$$ on page 36 (bottom), conjectures 5.1, 5.2. I think these are quite non-trivial, but probably some smart young mathematician may solve them, given some amount of time (some months, maybe). I spent some amount of time thinking about them without success, and moreover let me mention that D. Zeilberger and D. Foata also failed to find a combinatorial proof of the Capelli identity of a very similar type -- the one proved by Kostant-Sahi and Howe-Umeda -- see their comments in Combinatorial Proofs of Capelli's and Turnbull's Identities from Classical Invariant Theory, page 9 bottom: "Although we are unable to prove the above identity combinatorially ... ". So the words above are some indication of the non-triviality of the conjectures. Personally I am quite interested in a proof; probably it can give a clue for further generalizations.

-

These two are probably my favourite.

We all know that cardinals are ordered by injections, i.e. $|A|\leq|B|$ if there is an injection from $A$ into $B$. But it's clear that we can also order the cardinals by surjections, $|A|\leq^\ast|B|$ if $A$ is empty, or $B$ can be mapped onto $A$. Assuming the axiom of choice these two notions are obviously equivalent. But what happens without the axiom of choice? Well, it is easy enough to give counterexamples that $|A|\leq^\ast|B|$ and $|A|\nleq|B|$ (e.g. Dedekind-finite sets that can be mapped onto $\omega$; $\Bbb R$ can be mapped onto $\aleph_1$ in Solovay's model).

The Partition Principle. If $|A|\leq^\ast|B|$ then $|A|\leq|B|$.

For over a century now it has been open whether or not this principle implies the axiom of choice in ZF. Russell claimed to have a proof, but it was never published.

The second open problem is also related to cardinals and their order: Assuming the axiom of choice all the cardinals are ordinals, and since there are no decreasing sequences of ordinals, the $\leq$ relation is well-founded. But what happens without the axiom of choice? Well, we know that every partial order can be realized as cardinals of some model of ZF (in fact we can assume dependent choice to hold as high as we want to).
An immediate consequence is that it is consistent that the cardinals are not well-founded. Even the existence of a Dedekind-finite set implies this easily, because removing one element decreases the cardinality and so we can find a decreasing sequence of cardinals (but we can't find a decreasing chain of subsets!).

The question whether or not the assumption that there is no decreasing sequence of cardinals implies the axiom of choice in ZF is open, for about a century as well. Unlike the previous problem, where pretty much everyone feels that the answer is positive, this problem had people arguing for both sides. Some people suggested that it will imply choice, others suggested that it will not imply the axiom of choice. But no one has an answer yet.

-

2 I have this huge fantasy that the partition principle will not prove the axiom of choice, that would be epic in ways beyond words. I'd love for that to happen. – Asaf Karagila Feb 22 at 23:46

In the absence of DC, one might modify your second question to ask about the status of "The ordering of cardinals is well-founded." – Andreas Blass Feb 23 at 13:43

Andreas, I agree. But that's the historical question. An interesting question which I have never really given my full attention (for a sufficient time, anyway) is whether or not when the cardinals are not well-founded there is a decreasing sequence. The only case to verify is when there are no Dedekind-finite sets, and DC still fails. – Asaf Karagila Feb 23 at 16:14

I think the following were already mentioned in answers to http://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand

Are there circulant Hadamard matrices of degree > 4?

What is the actual value of R(5,5)?

The integer brick problem.

-

2 I must confess I don't see that much is gained by having this separate question in addition to the one I linked to. – Yemon Choi Jul 2 at 23:21

Then, perhaps you could suggest that the questions should be merged ;) – quid Jul 2 at 23:35

3 Professional mathematicians rarely do anything to their parents. – Gerry Myerson Jul 3 at 5:56

2 My parents have tapes of talks of mine. They play them (perhaps the first one each time) when they have trouble sleeping. – Douglas Zare Jul 3 at 7:16

1 If I wasn't actually clear enough -- In my mind the difference between this question and my previous one comes down to this: what might I say to suggest to my American state university mathematics majors in their junior year what it is that professional mathematicians do, versus what might I say in the direction of accomplishing the same when speaking to parents of prospective frosh. – David Feldman Jul 5 at 16:43
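For context on the first of these (standard background, not from the thread): a circulant Hadamard matrix is an $n\times n$ matrix $H$ with entries $\pm 1$ whose rows are successive cyclic shifts of its first row, satisfying

$$H H^{T} = nI.$$

The order-4 example is the circulant with first row $(-1,1,1,1)$, and the long-open conjecture (attributed to Ryser) is that no circulant Hadamard matrix exists for $n>4$.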
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379890561103821, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/93909/lebesgue-integral-with-respect-to-vector-measures
## Lebesgue integral with respect to vector measures? Good evening, I'm reading some papers of Jim Agler and Nicholas Young, in which they prove an integral representation formula with respect to a vector measure, but the integration is in the sense of Riemann, not of Lebesgue. And the proofs for Riemann integrals are often long. Secondly, in Rudin's book Functional Analysis, the author doesn't define an integral of Lebesgue's type with respect to the spectral measure. Instead, he always wants the readers to understand the integral with respect to the spectral measure as in the scalar case. Precisely, let $T$ be a normal operator on a Hilbert space $H,$ and let $T = \int_{z\in\sigma(T)} z\,dE(z)$ be the spectral decomposition of $T.$ The integral has to be understood as $\langle Tx,y\rangle = \int_{\sigma(T)}z\,dE_{x,y}(z)$, where $E_{x,y}$ is the scalar measure defined by $E_{x,y}(\omega) = \langle E(\omega)x,y\rangle$ for all Borel subsets $\omega$ of $\sigma(T)$, with $x, y \in H$ and $\langle \cdot,\cdot\rangle$ the inner product of $H.$ My questions: Can these integrals be understood in the sense of Lebesgue? What is a good introductory reference for the theory of the Lebesgue integral with respect to vector measures (of course, if this theory exists)? What are the difficulties when constructing such a theory? Maybe my questions are not well written, because of my limited English. I hope you understand my post. Any help is appreciated. Thanks in advance, Duc Anh - en.wikipedia.org/wiki/Bochner_integral – Marc Palm May 23 at 9:23 ## 2 Answers Marc Rieffel has some notes that develop integration with respect to Banach-space valued measures from the ground up. The notes are very thorough. They are available here: http://math.berkeley.edu/~rieffel/measinteg.html "Lecture notes from 1970 for the first-year graduate-level analysis course on measures and integration at UC Berkeley that I gave several times during the late 1960's can be found here. The notable feature of the notes is that they treat the Bochner integral from the beginning, in a quite elementary way (e.g. no mention of the Hahn-Banach theorem). This has both practical and pedagogical advantages. Not all lectures listed in the table of contents were ever typed up. The origin of these lecture notes lies in the turmoil on the UC Berkeley campus in the late 1960's, when there were periods of time when students indicated that as a protest they did not want to come to class (and if they did try to come to class there was a significant probability that they would encounter tear-gas or worse), but they indicated that they wanted to continue their studies and so requested that written notes of the lectures they missed (if held at all) be made available to them." - thank you very much, I will read these notes – Đức Anh Apr 13 2012 at 8:58 I didn't know the reference above :) So far, the latest survey on the subject is due to J. Diestel and J. Uhl, Vector Measures. By the way, no PDF is available. - Thank you very much. – Đức Anh May 23 at 13:05
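As a finite-dimensional sanity check of the displayed formula (my own illustration, not taken from Rudin or the Agler-Young papers): for a Hermitian matrix the spectral measure is a finite sum of point masses at the eigenvalues, so $\langle Tx,y\rangle=\int z\,dE_{x,y}(z)$ becomes a finite sum that is easy to verify numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
# build a random Hermitian (hence normal) 4x4 matrix T
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (M + M.conj().T) / 2

lam, U = np.linalg.eigh(T)   # T = sum_k lam_k P_k with P_k = u_k u_k*
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# E_{x,y}({lam_k}) = <P_k x, y>, with <u, v> = sum_i u_i conj(v_i)
E_xy = np.array([np.vdot(y, np.outer(U[:, k], U[:, k].conj()) @ x)
                 for k in range(4)])
lhs = np.vdot(y, T @ x)      # <Tx, y>
rhs = np.sum(lam * E_xy)     # "integral" = finite sum over the point masses
print(np.allclose(lhs, rhs)) # True
```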
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9541651010513306, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/87717-find-remainder.html
# Thread: 1. ## find the remainder Find the remainder when $\sum_{1}^{10}(n^{2}+n)n!$ is divided by 10. No idea. I just know that $\sum n\cdot n!=(n+1)!-1$. 2. Originally Posted by adhyeta Find the remainder when $\sum_{1}^{10}(n^{2}+n)n!$ is divided by 10. No idea. I just know that $\sum n\cdot n!=(n+1)!-1$. Notice that for $n \ge 5$ each term is divisible by 10, so they have a remainder of zero (why?). Now we only need to worry about the first 4 terms. We also don't need to worry about the 4th term (why?), so we get $(2)1+(6)2+(12)6=86$, so the remainder is 6. 3. Originally Posted by TheEmptySet Notice that for $n \ge 5$ each term is divisible by 10, so they have a remainder of zero (why?). Now we only need to worry about the first 4 terms. We also don't need to worry about the 4th term (why?), so we get $(2)1+(6)2+(12)6=86$, so the remainder is 6. Yes, for the fourth term too we have $16+4=20$ (hence it is divisible by 10). Thank you.
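A quick numeric check of the thread's conclusion (my own addition; it just evaluates the sum directly):

```python
from math import factorial

# sum_{n=1}^{10} (n^2 + n) * n!, reduced mod 10
total = sum((n * n + n) * factorial(n) for n in range(1, 11))
print(total % 10)  # 6, matching the answer above
```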
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9581847190856934, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/233880/what-is-the-distribution-of-sum-of-dependent-normal-random-variables?answertab=votes
# What is the distribution of a sum of dependent normal random variables? If $X_1,\ldots,X_n$ are dependent normal random variables, what would be the distribution of $X_1+\cdots+X_n$? Is it still normal? - ## 1 Answer It depends on how they are dependent. The answer is yes if they are multivariate normal, but not always in general. - @May: if they are independent (as in the first i of iid) then $\sum_i \sum_j X_i Y_j = \sum_i X_i \times \sum_j Y_j$. So if they are normally distributed then you have the product of two normal random variables, which is in general not normally distributed. – Henry Nov 16 '12 at 0:52
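A standard counterexample behind the "not always in general" can be checked by simulation (my own sketch; the sign-flip construction is folklore, not from the answer above). Take $X\sim N(0,1)$ and $Y=sX$ with an independent random sign $s=\pm1$: both $X$ and $Y$ are standard normal, but $X+Y$ has an atom at $0$, so it cannot be normal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)
s = rng.choice([-1.0, 1.0], size=n)   # independent random sign
y = s * x                              # y is again N(0,1) by symmetry
z = x + y                              # equals exactly 0 whenever s = -1

print(np.mean(z == 0.0))  # ~0.5: a point mass no normal distribution has
```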
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312795996665955, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38495/exhaustive-nature-of-parallel-universe?answertab=votes
# Exhaustive nature of parallel universe [closed] We know that according to the infinite parallel universe theory there is a different universe for any possible event. Now I was wondering if all those universes form an exhaustive set. In other words, are all the events represented by those universes? - Didn't you just answer your own question? (by saying "there is a different universe for any possible event") – Dmitry Brant Sep 27 '12 at 17:10 ## closed as not a real question by Qmechanic♦, Manishearth♦, Emilio Pisanty, Warrick Dec 26 '12 at 12:37 ## 1 Answer The nearest we have to a quantitative theory of multiple universes is the string landscape, and the conventional wisdom is that there are $10^{500}$ different possible universes (well, $10^{500}$ different sets of physical laws: you could have multiple universes with the same physical laws). So there are only a finite (though absurdly large!) number of different universes. Whether this forms an "exhaustive set" depends on what you mean by the term. Assuming the string landscape is correct, there can only be $10^{500}$ different types of universe, so they form an exhaustive set by definition - there can be no other universes with different physical laws. However, you could imagine a universe with properties not found in the set of $10^{500}$ universes. It wouldn't be realised in nature. Later: I've had a sudden thought that you might be referring to the Many Worlds Interpretation of quantum mechanics. If so, that's a rather different meaning of multiple universes. It's more like a space of possible configurations of this universe. If our universe is infinite then every configuration will be realised somewhere. If our universe is finite (e.g. because it closes on some large scale) then there are only a finite number of possible configurations. - Well, not every configuration needs to be realized just because the universe is infinite. Also, for there to be a finite number of configurations there can't be continuous values of time, space, or energy either. – jcohen79 Sep 27 '12 at 19:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402118921279907, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/5562/list
## Return to Answer 2 added 588 characters in body; added 13 characters in body There is a literature on "convergence spaces" of various kinds. I read some of that in the 70s, but I do not remember a lot of the detail. There is something called "pre-topological space" or "closure space". And there is "pseudo-topological space". Each of them can be defined in terms of convergence of filters. Or in terms of convergence of nets. Or in terms of neighborhood systems. Or in terms of a closure operation. One of these is associated with Choquet. There is a big topology text, Topological Spaces by Čech, that takes the closure space as the fundamental notion. Added November 16. From my old paper "Three Cryptoisomorphism Theorems" in Studies in Foundations and Combinatorics, Advances in Mathematics Studies vol. 1, 1978, pp. 49--60 [Pretopology, closure space, mehrstufige Topologie, pré-adhérence] Axioms in terms of a closure operation $\eta$ from sets to sets: (b1) $A \subseteq \eta(A)$; (b2) $\eta(\emptyset) = \emptyset$ and $\eta(A \cup B) = \eta(A) \cup \eta(B)$. Of course, to specify a topology we have to add a third axiom: $\eta(\eta(A)) = \eta(A)$. Without the third axiom, we get the more general pretopology. 1 [made Community Wiki] There is a literature on "convergence spaces" of various kinds. I read some of that in the 70s, but I do not remember a lot of the detail. There is something called "pre-topological space" or "closure space". And there is "pseudo-topological space". Each of them can be defined in terms of convergence of filters. Or in terms of convergence of nets. Or in terms of neighborhood systems. Or in terms of a closure operation. One of these is associated with Choquet. There is a big topology text by Gál that takes the closure space as the fundamental notion.
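To make the axioms concrete, here is a tiny finite check (my own example, not from the cited paper): the operation $\eta(A)=A\cup N(A)$, where $N(A)$ is the set of graph-neighbours of $A$ on the path $0-1-2$, satisfies (b1) and (b2) but is not idempotent, so it defines a pretopology that is not a topology.

```python
from itertools import combinations

points = {0, 1, 2}
nbrs = {0: {1}, 1: {0, 2}, 2: {1}}   # the path graph 0 - 1 - 2

def eta(A):
    # A together with all neighbours of points of A
    return frozenset(A) | frozenset(p for a in A for p in nbrs[a])

subsets = [frozenset(c) for r in range(4) for c in combinations(points, r)]

assert eta(frozenset()) == frozenset()            # eta(empty) = empty
assert all(A <= eta(A) for A in subsets)          # (b1)
assert all(eta(A | B) == eta(A) | eta(B)
           for A in subsets for B in subsets)     # (b2)
print(eta({0}), eta(eta({0})))                    # {0,1} vs {0,1,2}: not idempotent
```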
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9254123568534851, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/157336-finding-angle-x.html
# Thread: 1. ## Finding the angle x Hello everyone! I have a problem here: The question is simple: I have to find the value of $x$. So here are my workings: If you notice any mistakes, then please inform me. Help is greatly appreciated. 2. It's fine. 3. The answer as stated in the book is 141°. Do you think that it's wrong? 4. To me, yes, it seems like the book is wrong. $x$ should be $147^{\circ}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9767462611198425, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/3057/is-there-a-topology-on-growth-rates-of-functions/3071
## Is there a topology on growth rates of functions? I've often idly wondered what one can say about the collection of "growth rates". By growth rate, let's say we mean an equivalence class of functions $(0,\infty) \to (0,\infty)$, where two functions $f_1,f_2$ are equivalent if $f_1/f_2$ and $f_2/f_1$ are bounded away from $0$ and $\infty$. You can add and multiply them, and they form a poset under the ordering where $f_1 \le f_2$ if $f_1/f_2$ is bounded above. So, in loose terms, does the sequence $x\log(1+x)$, $x\log(\log(10+x))$, $x\log(\log(\log(100+x)))$, ... converge to $x$ in some natural way? With a little thought you can construct a growth rate which is strictly greater than $x$ and strictly less than all growth rates in that sequence, so the answer is probably no. Still, is there any sort of natural "topology"? Can you find a directed set of growth rates which are linearly ordered, and eventually smaller than anything larger than $x$? There's probably a better way to look at this (which is why I ask). =) - ## 6 Answers You might want to look at the theory of Hardy fields. These are fields of germs of functions at a neighborhood of infinity closed under differentiation. The classical reference is Hardy's book "Orders of infinity". You may also want to look at the works of Maxwell Rosenlicht. While Hardy's book dates back to the 1920's, Rosenlicht's works are from the 1980's, and you can find them through MathSciNet. - 1 Someone ought to vote this up. (I already have.) The reference to Rosenlicht's work looks very useful, and extends my answer. – David Speyer Oct 28 2009 at 17:40 1 Bourbaki has a nice treatment of Hardy fields in his Volume 4 (Functions of a real variable). – Dmitri Pavlov Oct 28 2009 at 18:00 There is some fascinating work in the subject of cardinal characteristics of the continuum in set theory that directly relates to the concept of growth rates of functions. I believe that it is the ideas in this subject that are ultimately fundamental to your question. I explain a little about the general subject of cardinal characteristics in my answer here. Much of the interest of your question is already present for functions on the natural numbers. The two main orders on such functions that one considers in cardinal characteristics are • almost-less-than, where $f \lt^\ast g$ means that $f(n) \lt g(n)$ for all $n$ except finitely often, and • domination, where $f \lt g$ means that $f(n) \lt g(n)$ for all $n$. A family $F$ of functions is said to be unbounded if there is no function $g$ that has $f\lt^\ast g$ for all $f\in F$. That is, an unbounded family is a family that is not bounded with respect to $\lt^\ast$. A family $F$ is dominating if every function $f$ is dominated by some function $g\in F$. The corresponding cardinal characteristics of these two types of families are: • The bounding number $\frak{b}$ is the size of the smallest unbounded family. • The dominating number $\frak{d}$ is the size of the smallest dominating family. It is easy to see that $\frak{b}\leq\frak{d}$, simply because any dominating family is also unbounded. Also, both $\frak{b}$ and $\frak{d}$ are at most the continuum $\frak{c}$, the size of the reals.
It is not difficult to see that both of these numbers must be uncountable, since for any countable family of functions $f_0,f_1, f_2,\ldots$, we can build the function $g(k) = \sup_{n\leq k}f_n(k)+1$, which eventually exceeds any particular $f_n$. In other words, any countable family of functions is bounded with respect either to almost-less-than or with respect to domination. Thus, $\omega_1\leq\frak{b},\frak{d}\leq\frak{c}$. It follows from these simple observations that if the Continuum Hypothesis holds, then both the bounding number and the dominating number are equal to $\omega_1$, which under CH is the same as the continuum $\frak{c}$. Now, the amazing thing is that ZFC independence abounds with these concepts. First, it is relatively consistent with ZFC that the Continuum Hypothesis fails, and both the dominating number and the bounding number are as large as they could possibly be, the continuum itself, so that $\frak{b}=\frak{d}=\frak{c}$. Second, it is also consistent that both are strictly intermediate between $\omega_1$ and the continuum $\frak{c}$, but still equal. Next, it is also consistent with $\text{ZFC}+\neg\text{CH}$ that the bounding number $\frak{b}$ is as small as it could be, namely $\omega_1$, but the dominating number is much larger, with value $\frak{c}$. The tools for proving all these results and many others involve the method of forcing. Now, let me get to the part of my answer that directly relates to the idea of rates-of-growth. A slalom is defined to be a sequence of natural number pairs $(a_0,b_0), (a_1,b_1), \ldots$ with $a_n\lt b_n$. Each slalom $s$ corresponds to the collection of functions $f:\omega\to\omega$ such that $f(n)$ is in the interval $(a_n,b_n)$ for all but finitely many $n$. That is, imagine an Olympic athlete on skis, who must pass through (all but finitely many of) the slalom posts. An $h$-slalom is a slalom such that $|b_n-a_n|\leq h(n)$. Thus, a slalom is a growth rate of functions, in a very precise sense. With suitably chosen (countable collections of) slaloms, it is possible to express the concept of growth rate that you mentioned in your question. The set theory gets quite interesting. For example, a major question is: how many slaloms suffice to cover all the functions? This is particularly interesting when one restricts the size of the slaloms by considering $h$-slaloms. A fat slalom is a $2^n$-slalom, where the $n^{\rm th}$ interval has size at most $2^n$. It turns out that this is connected with ideas involving meagerness, otherwise known as category. For example, Bartoszynski proved that every set of reals of size less than $\kappa$ is meager if and only if for every function $h$ and every family of $h$-slaloms $F$ of size less than $\kappa$, there is a function $g$ eventually missing every slalom in $F$. In other words, the possibility of a family of fewer than $\kappa$ many $h$-slaloms covering all the functions is equivalent to every set of size less than $\kappa$ being meager. And so on. There is a large amount of work on these and similar ideas. An article particularly focused on slaloms would be this. And there is a survey article by Brendle on cardinal characteristics. - I finally texified this entry. – Joel David Hamkins Jan 3 2012 at 22:49 Joel, you will sleep better at night after this. – Will Jagy Jan 3 2012 at 23:14 You are right! It has taken me two years to do this! Finally, I will get some rest....
(But the fuller truth is that in the early days of MO, when I posted this answer, my computer refused to show any tex output except with lots of dollar signs and \fully\verbose\tex\macros, and so at that time I composed all my entries in html. But I noticed that some of the asterisks didn't come out properly in this answer, which caused mathematical errors in how my answer appeared, and so I finally texified today.) – Joel David Hamkins Jan 3 2012 at 23:35 You are going to run into issues because of oscillatory functions. For example, is $x(1+\sin x)$ greater than or less than $x$ in your order? G. H. Hardy proposed solving this problem by defining the logarithmico-exponential functions. These are all the functions on the real line which can be defined starting with the identity and scalars and applying addition, subtraction, multiplication, division, exponentiation and logarithm; with the proviso that we are only allowed to take the logarithm of a function if it is eventually positive. One of Hardy's main theorems is that every LE-function is either eventually positive, eventually negative or identically 0. So we can define a total order by $f>g$ if $f(x) - g(x)$ is eventually positive. Hardy then proposed studying a general function by studying the interval of LE-functions which are infinitely often greater than and infinitely often less than it. Unfortunately, I don't know a good modern reference on this subject. Hardy's book is online, but with pages missing. I am told that the buzzword for modern work on this subject is "Hardy fields", but that is the limit of my knowledge. - The complete text of Orders of Infinity is available for download from archive.org/details/ordersofinfinity00harduoft – Charles Stewart Jan 28 2010 at 10:56 I might be misremembering, but I believe the question of whether there is a cofinal totally ordered sequence of growth rates is independent of ZFC. It follows from CH (or more generally, Martin's Axiom) by a simple diagonalization argument, using the fact that for any countable set of growth rates, you can diagonalize them to get a growth rate which is faster than all of them. In any case, questions about the order structure of growth rates are highly set-theoretic in nature. - Yes, I think this is right, and I explain a bit more about some of it in my answer. – Joel David Hamkins Feb 2 2010 at 4:35 Every totally ordered set naturally gives rise to a topology; the basis of the topology is the set of open intervals and open rays, just as in the order definition of the topology on $\mathbb{R}$. See the Wikipedia article. On the other hand, what you can say about this particular order probably depends on how strong you allow your axioms to be. - By the way, this order doesn't have the least upper bound property, so my guess is that no bounded nonconstant monotonic sequence converges. – Qiaochu Yuan Oct 28 2009 at 14:27 Isn't the original question allowing partial order and not just total (linear) order? – Yemon Choi Oct 28 2009 at 18:11 Partial orders have a topology with exactly the same construction. But my understanding of the intent of the question is that he wanted to define "growth rate" in such a way that a total order was obtained. – Qiaochu Yuan Oct 28 2009 at 18:21 Then there are transseries. How about a plug ... "Transseries for Beginners" - 2 Pages 19-22 look particularly relevant. – David Speyer Oct 28 2009 at 18:45
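The diagonalisation used above to show that countable families are bounded is concrete enough to run. Here is a toy sketch (my own, with a finite list of functions standing in for the countable family):

```python
# g(k) = max_{n <= k} f_n(k) + 1 eventually exceeds each f_n in the family
def diagonal_bound(fs):
    return lambda k: max(f(k) for f in fs[:k + 1]) + 1

fs = [lambda k, c=c: c * k + c for c in range(50)]   # a toy countable family
g = diagonal_bound(fs)

# for each fixed n, g(k) > fs[n](k) once k >= n:
print(all(g(k) > fs[10](k) for k in range(10, 200)))  # True
```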
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504777193069458, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/4733/simple-question-about-stochastic-differential?answertab=active
# Simple question about stochastic differential What is the equivalent of the product rule for stochastic differentials? I need it in the following case: let $X_t$ be a process and $\alpha(t)$ a real function. What would be $d(\alpha(t)X_t)$? - ## 1 Answer If $\alpha(t)$ is of finite variation, then the product rule is the same as in ordinary calculus: $$d(\alpha(t)X_t) = \alpha(t) dX_t + X_t d\alpha(t).$$ If you had $X_t$ and $Y_t$ as processes, you would get $$d(X_t Y_t) = X_t dY_t + Y_t dX_t + d [X,Y]_t.$$ If $Y$ has finite variation, the last quadratic covariation term is zero. The second equation is just applying Ito's formula to $f(x,y) = xy$. - Hi! Can you also give the product rule in the non-finite-variation case? – Nikos Dec 13 '12 at 12:31 1 The second equation is the general product rule. – quasi Dec 13 '12 at 15:37
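The second equation is easy to check numerically, since its Euler discretisation $\Delta(XY) = X\,\Delta Y + Y\,\Delta X + \Delta X\,\Delta Y$ is an exact telescoping identity, and the cross term approximates $[X,Y]_T$. A sketch (my own; the step size, horizon and correlation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, rho = 200_000, 1e-5, 0.3
T = n * dt
dW = rng.standard_normal((2, n)) * np.sqrt(dt)
dX = dW[0]
dY = rho * dW[0] + np.sqrt(1 - rho**2) * dW[1]   # corr(dX, dY) = rho
X = np.concatenate(([0.0], np.cumsum(dX)))
Y = np.concatenate(([0.0], np.cumsum(dY)))

lhs = X[-1] * Y[-1]                               # X_T Y_T - X_0 Y_0
rhs = np.sum(X[:-1] * dY + Y[:-1] * dX) + np.sum(dX * dY)
print(np.isclose(lhs, rhs))                       # True (exact identity)
print(np.sum(dX * dY), rho * T)                   # [X,Y]_T is close to rho*T
```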
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9611772298812866, "perplexity_flag": "head"}
http://mathoverflow.net/questions/18421?sort=newest
## How do they verify a verifier of formalized proofs? In an unrelated thread Sam Nead intrigued me by mentioning a formalized proof of the Jordan curve theorem. I then found that there are at least two, made on two different systems. This is quite an achievement, but is it of any use for a mathematician like me? (No, this is not what I am asking; the actual question is at the end.) I'd like to trust the theorems I use. To this end, I can read a proof myself (the preferred way, but sometimes hard to do) or believe experts (a slippery road). If I knew little about topology but occasionally needed the Jordan theorem, a machine-verified proof could give me a better option (and even if I am willing to trust experts, I could ensure that there are no hidden assumptions obvious to experts but unknown to me). But how to make sure that a machine verified the proof correctly? The verifying program is too complex to be trusted. A solution is of course that this smart program generates a long, unreadable proof that can be verified by a dumb program (so dumb that an amateur programmer could write or check it). I mean a program that performs only primitive syntax operations like "plug assertions 15 and 28 into scheme 9". This "dumb" part should be independent of the "smart" part. Given such a system, I could check axioms, definitions and the statement of the theorem, feed the dumb program (whose operation I can comprehend) with these formulations and the long proof, and see if it succeeds. That would convince me that the proof is indeed verified. However, I found no traces of this "dumb" part of the system. And I understand that designing one may be hard, because the language used by the system should be both human-friendly (so that a human can verify that the definitions are correct) and computer-friendly (so that a dumb program can parse it). And the definitions should be chosen carefully - I don't want to dig through a particular construction of the reals from rationals to make sure that this is indeed the reals that I know. Sorry for this philosophy; here is the question at last. Is there such a "dumb" system around? If yes, do formalization projects use it? If not, do they recognize the need and put the effort into developing it? Or do they have other means to make their systems trustable? UPDATE: Thank you all for interesting answers. Let me clarify that the main focus is interoperability with a human mathematician (who is not necessarily an expert in logic). It seems that this is close to interoperability between systems - if the formal languages accepted by core checkers are indeed simple, then it should be easy to automatically translate between them. For example, suppose that one wants to stay within symbolic logic based on simple substitutions and axioms from some logic book. It seems easy to write down these logical axioms plus ZF axioms, basic properties (axioms) of the reals and the plane, some definitions from topology, and finally the statement of the Jordan curve theorem. If the syntax is reasonable, it should be easy to write a program verifying that another stream of bytes represents a deduction of the stated theorem from the listed axioms. Can systems like Mizar, Coq, etc, generate input for such a program? Can they produce proofs verifiable by cores of other systems? - Check Mizar. It was mentioned here several times.
It is a proof checker; its input is formalized theorems and proofs, and its language is near-natural. So you may consider a program which transforms proofs described in the machine-level way you mentioned above into the Mizar meta-language, which is human-readable, represents known mathematical facts, and may be checked by the Mizar system. – kakaz Mar 16 2010 at 21:14 Mizar is a "smart" program as far as I see. It weighs 27 megabytes, and what I am asking for is a small stand-alone checker. I would tolerate primitive syntax as long as a formulation of a typical theorem is comprehensible. – Sergei Ivanov Mar 16 2010 at 21:30 1 What makes you think human proof checkers are less buggy than computer ones? :) At least in a computer program, you only have to get it right once: a brain can make any number of different mistakes each time it runs. – Max Jan 8 2011 at 22:39 2 @Max: First, I have not seen a single bug-free program yet. Second, there are many human checkers but essentially only one machine checker for each proof. – Sergei Ivanov Jan 9 2011 at 23:32 Certain kinds of errors have to be accepted as part of human creativity; that is, we are all human and we make mistakes, but this is part of our life. It is worth noting that in certain situations errors and mistakes are not as important as the statements we make even with these errors in mind! Compare the notes about David Hilbert's work in the famous G.-C. Rota paper ams.org/notices/199701/comm-rota.pdf – kakaz Feb 14 2011 at 12:23 ## 9 Answers Is there such a "dumb" system around? If yes, do formalization projects use it? If not, do they recognize the need and put the effort into developing it? Or do they have other means to make their systems trustable? This is called the "de Bruijn criterion" for a proof assistant -- just as you say, we want a simple proof checker, which should be independent of the other machinery. The theorem provers which most directly embody this methodology are those in the LCF tradition, such as Isabelle and HOL Light. They actually work by generating proof objects via whatever program you care to write, and sending that to a small core to check. Systems based on dependent type theory (such as Coq) tend to have more complex logical cores (due to the much greater flexibility of the underlying logic), but even here a core typechecker can fit in a couple of thousand lines of code, which can easily be (and have been) understood and reimplemented. - 1 Do they address the "social" aspect of the question - that a human (not embedded deeply into the system) can check what is actually proved? That is, what is the formulation, definitions, and the axioms used - in a form readable by a human and verified by the simple core? How large are such data sets, actually? – Sergei Ivanov Mar 16 2010 at 23:38 3 You can understand what is proved, because the definitions and theorems have to be understandable for someone to write and prove them in the first place. The theorems are the same as you'd find in a book, only written with quantifiers more explicit. The formal proof objects are (unfortunately) not understandable, because they contain too much low-level detail. Development sizes vary a lot. Surprisingly, fancy stuff (eg, constructive topology) is often shorter than basic mathematics (eg, arithmetic or group theory), since "basic" things rely on a much wider set of obvious mathematical facts.
– Neel Krishnaswami Mar 17 2010 at 11:47 Are formulations and axioms sent to the core checker in exactly the same form as they are typed into the higher-level system? I was under the impression that the higher-level system has some fancy input language that simplifies writing. (Maybe I confused it with Mizar.) – Sergei Ivanov Mar 17 2010 at 17:38 One simple suggestion no-one seems to have mentioned is to have the verifier prove itself correct. Obviously, this cannot really give any assurance that the verifier is correct, since if the verifier is incorrect its proofs are worthless. However, on heuristic grounds, this should give some confidence. The reason is that errors in assertions about the verifier's proofs (which will be checked) should be uncorrelated with the errors in the verifier itself. Of course, no physical process can be expected to prove anything with complete accuracy, so these kinds of heuristics are the best one can hope for anyway. - This self-verification idea is called "reflection". Harrison has proved the consistency of the HOL Light logical core (or, strictly speaking, a subset of the logical core) using HOL Light. As you say, this may use an incorrectness to justify itself, but it definitely gives confidence. This consistency proof has been successfully ported to my HOL Zero system, removing these concerns (although I have not audited the ported proof). A belt-and-braces approach is best of all worlds, combining reflective proofs, LCF-style inference kernels and porting proofs to trusted proof checkers. – Mark Adams Jan 9 2011 at 14:40 I think the question you are asking - about how we can trust a formal proof - is a very important question. I have spent considerable effort developing software to specifically address this question. You touch on various things I have concentrated on. It is true that various systems prominent in the formalisation of mathematics - including the HOL systems (HOL4, ProofPower HOL, HOL Light), Isabelle and Coq - are built according to the "LCF style", which means that all deduction must go via a relatively small kernel of trusted source code (implementing the primitive inference rules), and that this greatly reduces the risks of producing unsound proofs on these systems. It would also not be an exaggeration to say that almost everyone working in formal proof is happy with this situation. Indeed, probably most (but not those working on the above systems) feel that resorting to the LCF style is overkill and an unnecessary drain on user execution time and on development effort. However, there are 3 major problems with this status quo: A) Most "LCF-style" systems do not implement the LCF-style kernel idea as purely as may be expected. Some systems have back doors to creating "proved theorems", such as importing the statements (but not the proofs) of previously proved results from disk, and trust that the user will not abuse this. Also, to reduce execution time, most systems implement various derivable inference rules as primitives, multiplying up the size of the trusted source code. Also, the kernels of most systems typically incorporate large amounts of supporting code (e.g. for organising theories) and are not particularly clearly implemented, and so are difficult to review. It should be noted that HOL Light does not suffer from any of these problems.
B) The trusted aspects of an LCF-style system are NOT limited to the design/implementation of its LCF-style kernel. As in all systems, they at least also include the design of the concrete syntax and the implementation of the pretty printer, since, in practice, the user will only view results displayed in concrete syntax via the pretty printer. However, each system has problems with its concrete syntax and/or pretty printer that allow misleading information to be displayed to the user (e.g. by using irregular variable names, or names that are overloaded). I have found many ways of appearing to prove "false" in these systems! Also, depending on how the system is used, the parser is arguably also a trusted component. C) The importance of the human process of checking that the intended result has in fact been proved (I call this "proof auditing") is generally greatly underestimated, and in practice often not even carried out at all. As you rightly point out, the axioms and definitions used in a formal proof need to be carefully checked, as well as the statement of the theorem itself. I have known subtle mistakes in definitions to render real formal proofs on real projects completely invalid. I have developed an open source theorem prover called HOL Zero that addresses issues A-C above and is designed for use in proof auditing and generally to be as trustworthy as possible. It has a watertight inference kernel, a well-designed concrete syntax and pretty printer, and the source code aims to be as clear and well-commented as possible. However, it should be noted that it does not have the advanced automatic and/or interactive proof facilities of the existing systems I mention above, and is not suited to developing large formal proofs. HOL Zero can be downloaded from here (it needs OCaml and a Unix-like operating system): http://www.proof-technologies.com/holzero The concept of checking one system using another is not only philosophically reassuring, but also of pressing need (due to the above issues A-C). As you say, what is needed is the ability to port proof objects between systems (the so-called "de Bruijn criterion"). Strictly speaking, the de Bruijn criterion is as you state your requirement - the ability to capture a proof as an object (e.g. a text file) - rather than the LCF style (but let's not get too philosophical about the equivalence of these approaches). Anyway, there are some practical issues here: 1. The "dumb program" that you refer to needs to be surprisingly sophisticated, otherwise it loses much of its purpose. If it is just an LCF-style kernel, the data it outputs will be too slow to review for large projects. As you say, it needs to be human-friendly - a decent pretty printer is a practical necessity. Also, to make proof exporting (see next item) work in practice, it needs to work at least at a slightly higher level than the kernel, and so some supporting theory is required to be built. So yes, a dumb program is required, which should be as easy to understand as possible, but it is more challenging to build than a few lines of code. 2. Capturing proof objects in a suitably efficient way is a non-trivial exercise. The work that others mention above successfully ports things like the base system of HOL Light, but is completely incapable of handling something on the scale of Hales's HOL Light proof of the Jordan Curve Theorem, let alone Hales's Flyspeck Project (using HOL Light to check his non-formal proof of the Kepler Conjecture). 3.
Some sort of neat and trivial correspondence of equivalent theory between systems is useful. This is better than importing language statements as huge expressions, in terms of some highly complex embedding of one notation inside another, which would either greatly increase the complexity of the checking system or make its human usage difficult. HOL Zero is primarily aimed at the "dumb program" proof checker role. The idea is that it will import and replay proofs that have been exported from other HOL systems. I have implemented a proof exporting mechanism which, unlike others' mechanisms, handles with ease large proofs such as Hales's Jordan Curve proof and the (almost-complete) Flyspeck Project. I am currently working on a proof importing mechanism for HOL Zero. (Note that a former proof importing mechanism I developed worked on an old version of HOL Zero and successfully ported Hales's Jordan Curve proof and Harrison's proof of the consistency of the HOL Light kernel.) - I don't see why pretty-printing is a problem. A standalone checker should just read its input files (human-readable definitions and statements and machine-generated proofs) and print "Yes" or "No" to the standard output. – Sergei Ivanov Jan 9 2011 at 1:12 @Sergei: What if the "ugly" proof that the checker reads is correct, but then when a human mathematician goes to look at the "pretty" printed result, a false statement has been printed? – Daniel Litt Jan 9 2011 at 10:12 @Sergei: Yes, that is part of the job of the standalone checker, to print "yes" or "no". But there is a problem here: what exactly is it saying "yes" or "no" to? The exporter or importer may have made an error, and the unprinted theorem could be establishing something else. In any case, the statement of the theorem and any definitions/axioms used need to be audited at some point, but the exporting system's printer cannot be trusted. It's far better for the checker to fully support the auditing process, regardless of the trustworthiness of the exporting system, the exporter or the importer. – Mark Adams Jan 9 2011 at 14:09 @Daniel: It should not print the statement. It should verify the statement (which is written by a human). – Sergei Ivanov Jan 9 2011 at 14:17 @Mark: If the definitions etc are to be audited by a human, they can as well be written down by a human. – Sergei Ivanov Jan 9 2011 at 14:32 You are asking a lot of great questions: [Are computer proof assistants] any use for a mathematician like me? • On a very basic level, these systems prevent you from making mistakes. Consequently, this spares one from the peer review process. One of the proofs of the Jordan Curve theorem was carried out by Thomas Hales, as part of his attempt to automatically verify the correctness of the Kepler Conjecture. Hales is basically resorting to automated theorem proving because he feels that his proof, which involves establishing that 50,000 linear programming problems are infeasible (last time I checked), cannot possibly be verified by human peer review. • Most systems (Coq, Isabelle, HOL-Light) have built-in automation. This helps to inform the informal notion that mathematicians have for what constitutes a trivial, mechanical derivation and what is nontrivial - my rule of thumb is, if a computer can't automatically derive a certain fact, it's probably something I should illustrate explicitly.
• Isabelle/HOL lets you use the automated theorem provers E, SPASS and Vampire to automatically prove propositions, with the entirety of Isabelle/HOL's library at their disposal. As Isabelle's library grows, this gains more and more power. But how to make sure that a machine verified the proof correctly? As Neel Krishnaswami mentioned above, one way one may be convinced is to learn how to program in pure functional programming languages such as OCaml or SML and read the source code of systems like HOL Light or Isabelle. In both of these systems, there is a file thm.ml that contains the declarations of the theorem constructors. These systems also have facilities for declaring new types. HOL Light has, along with the basic rules and type constructors, three axioms: extensionality (Leibniz's Law), the axiom of infinity and the axiom of choice. Moreover, HOL Light has been designed by its author John Harrison to exhibit relative self-consistency proofs, just like in set theory. You may read about them here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.2210&rep=rep1&type=pdf Here Harrison shows HOL-Light$-$Infinity has a model in HOL-Light, and HOL-Light has a model in HOL-Light+Grothendieck cardinal. The definitions should be chosen carefully - I don't want to dig through a particular construction of the reals from rationals to make sure that this is indeed the reals that I know. I assume you are aware that all complete ordered fields are isomorphic to one another? Isabelle/HOL happens to construct them using Cauchy sequences, and in its library there is another formulation using Dedekind cuts. HOL Light formulates the positive reals using the limiting slopes of functions $f:\mathbb{N}\to\mathbb{N}$ and then constructs the full reals using a semi-ring completion. Since algebraically all of these formulations are provably isomorphic, in practice the details of their construction are hidden from the users. Is there such a "dumb" system around? If yes, do formalization projects use it? If not, do they recognize the need and put the effort into developing it? Or do they have other means to make their systems trustable? They all require training to learn. However, for LCF-style systems like Coq, Isabelle, and HOL Light, it's like programming languages: once you learn one, you've learned the principles necessary to understand all of them. Isabelle and Mizar have special facilities for making proofs more "human". The system for Isabelle is called Isar. These systems aren't for dummies, sadly; one really needs a bit of special training to master them. On the other hand, they are much better than the "apply-style" proofs that are commonly employed, since they conform much better to human intuition. There's a system for Coq called Caesar that is in its infancy, but I expect it will ultimately make Coq much easier to use. If the syntax is reasonable, it should be easy to write a program verifying that another stream of bytes represents a deduction of the stated theorem from the listed axioms. Can systems like Mizar, Coq, etc, generate input for such a program? Can they produce proofs verifiable by cores of other systems? The short answer is YES, the long answer is YES, but it's complicated. Not all deductive systems have the same expressive power. Sets in ZFC are a special kind of object that one cannot construct in Higher Order Logic.
To formulate set theory in Isabelle/HOL and HOL-Light, one needs to postulate the existence of a kind of object that makes the axioms of Zermelo-Fraenkel set theory true (this is the route that Isabelle/HOLZF takes). On the other hand, one may embed HOL into ZFC without such difficulty - here is a paper where a translation system for Isabelle/HOL into ZFC is given using the logical framework LF: http://kwarc.info/frabe/Research/RI_isabelle_10.pdf Chantal Keller has imported HOL-Light into Coq in her MSc thesis here: http://perso.ens-lyon.fr/chantal.keller/Documents-recherche/Stage09/itp10.pdf Importing back from Coq is difficult. Coq has a much more expressive type system than HOL. HOL-Light and Isabelle/HOL can be inter-translated, however: • Isabelle/HOL to HOL-Light: http://www.cs.cmu.edu/~seanmcl/papers/modules.pdf One cannot convert Mizar proofs to any other system because it is closed source, and does not have a small kernel one can use to produce proof code readable by other engines :( - You may ask for the Mizar source code, and you will obtain it. There is one point: you have to write at least one Mizar "article" (a formalized proof) first. Mizar is closed because its creator Andrzej Trybulec does not want any branching of the system. It is probably a mistake, but this is the state of affairs. – kakaz Feb 14 2011 at 7:29 Many pointed out the essential: provers like Coq and HOL have a very small core that checks the proofs. I want to add that there are other provers, the 'automatic' ones, which do indeed tend to be complicated. For example, most SMT solvers are like that. The route taken there is exactly the one mentioned in the question. Instead of trying to verify the proof 'finder', they modify it to generate proofs that can be checked by a very small proof checker. (See this article.) - The key point is the idea of the kernel of a theorem prover, as Adam mentioned. To put it another way, the kernel is the smallest subset of the theorem prover's code base (and operating system and machine's physical realisation) that has the property that if the theorem prover proves a false theorem, then there is an error in the kernel that is responsible. Identifying the kernel is a matter of computer science, and errors in our grasp of computer science might lead us to misidentify the kernel. Beyond the confidence one has in a particular implementation's realisation on a particular machine, note: 1. The important theorem provers are multiply realised: many operating systems running on many machine architectures run the theorem proving code. So errors in the kernel that are due to the operating system or physical machinery must be ones that arise multiply, either by coincidence or by shared errors of design. 2. Theorem provers with the small core Neel describes (which includes all the important ones) can themselves be proven correct, giving a correctness proof that can be checked by other theorem provers. Less is done here than should have been done (I have heard that the Coq in Coq certification has been checked in Twelf, but I have no reference), but in principle this observation means that errors in the kernel arising in the implementation of the theorem prover itself must also arise multiply among the theorem provers that verify the correctness proof. I recommend Geuvers (2009)'s Proof assistants: History, ideas, and future for an overview of these, among other, issues. - Are these cores separate programs that make sense outside the whole system? Are proof objects text streams or objects of the high-level language in which the system is implemented? – Sergei Ivanov Mar 30 2010 at 11:03 @Sergei: Q1: often. Q2: when it comes to verification, the proof objects we are concerned with are usually text files that the user prepares, with or without the help of a proof editing assistant. These typically are, like Hilbert-style proofs, a list of assertions that build up to the conclusion, but typically there are gaps that the prover fills by performing some nontrivial (but usually decidable) computation. – Charles Stewart Mar 30 2010 at 12:26 I meant the proof objects that are sent to the core. The core should not do anything nontrivial. – Sergei Ivanov Mar 30 2010 at 13:05 @Sergei: The core should be a simple, easy-to-understand algorithm, but that does not mean it needs to be computationally trivial or inexpensive. I am not sure, but I think that proof checking for both LF and Gallina (Coq's core logic) is EXPTIME. – Charles Stewart Mar 30 2010 at 15:53 Sergei, I asked Benjamin Gregoire (who is one of the main developers of Coq) and he said the 'kernel' is not as isolated as it could be, mainly for efficiency reasons. He also pointed out that someone recently implemented a standalone checker for Coq object files (.vo). See gforge.inria.fr/scm/viewvc.php/trunk/checker/… From logs the author seems to be lix.polytechnique.fr/~barras – rgrig Apr 1 2010 at 12:18 See question #5 in the Coq FAQ: http://coq.inria.fr/faq "You have to trust that the implementation of the Coq kernel mirrors the theory behind Coq. The kernel is intentionally small to limit the risk of conceptual or accidental implementation bugs." - Arguably Norman Megill's Metamath http://metamath.org/ is not a mainstream proof verifier. But various third parties have written short programs to check his deductions, starting with Ralph Lieven's mmverify program, which is 300 lines of Python; surely short enough for an interested bystander to survey. - 1 You have to add to those 300 lines the code of the Python interpreter, and the code of the C compiler used to compile it, and the linker used to link it, and the libraries upon which it depends that are used at runtime, and the operating system the code runs on, and the microcode of the processors used, and the correctness of the actual chip! – Mariano Suárez-Alvarez Mar 16 2010 at 22:43 13 You forgot to mention the physical laws of the universe where the chip works. I don't think this is too big a problem as long as there are independent implementations running on different hardware. Yes, and an alternative implementation of the universe, preferably open source. – Sergei Ivanov Mar 16 2010 at 23:16 4 Mariano and Sergei are just indicating two places to sit on the same issue. Sergei's question is "I am paranoid. Is everything definitely OK?" and Mariano's comment above is "You think you've got problems! I am very very paranoid and have realised I can never prove everything is OK". Mathematicians have to choose where to sit. – Kevin Buzzard Mar 17 2010 at 7:17 11 A related issue. Richard Pinch once told me that if you have a number p that you want to check is prime, and you start running a primality testing algorithm on it, then once the number is proved 99.999999999999% likely to be prime (e.g. because you've done 50 tests each one of which has a probability of 1/2 of failing if the number is composite) then before you do any more you might want to compare the probabilities you're dealing with, with the probability that little green men came down from Mars yesterday and replaced your computer with one where your code was replaced by buggy code. – Kevin Buzzard Mar 17 2010 at 7:21 2 mmverify's author is Raph Levien. – Charles Stewart Mar 30 2010 at 9:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481287002563477, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/114829/list
## Return to Answer

2 added 2009 characters in body

The question also asks how to show that $\mathcal{H}\mu$ is defined almost everywhere. I don't have a reference for this, but the standard proof can be modified without too much difficulty. The maximal operator $\mathcal{H}^*\mu(x)\equiv\sup_{\epsilon > 0}\lvert\mathcal{H}_\epsilon\mu(x)\rvert$ is weak (1,1) continuous, $$\left\lvert\left\lbrace x\colon\mathcal{H}^*\mu(x) > \lambda\right\rbrace\right\rvert\le C\lVert\mu\rVert/\lambda,$$ for all $\lambda > 0$ and a fixed constant $C$ (I'm using $\lvert\cdot\rvert$ to denote the Lebesgue measure). I'm working from the notes Interpolation, Maximal Operators, and the Hilbert Transform, by Michael Wong (Theorem 8.7). These notes look at the case where $\mu$ is absolutely continuous, but the proof carries across to the general case with no big changes. By weak continuity, if there exist measures $\mu_n$ with $\lVert\mu-\mu_n\rVert\to0$ such that the $\mathcal{H}\mu_n$ all exist almost everywhere, then $\mathcal{H}\mu$ exists almost everywhere.

If $\mu$ is absolutely continuous with differentiable density, then $\mathcal{H}\mu$ exists everywhere in the standard way. As the differentiable functions are dense in $L^1$, this extends to all absolutely continuous measures.

Only the case of singular measures $\mu$ remains. So, there exists a measurable $S\subseteq\mathbb{R}$ with zero Lebesgue measure such that $\mu(S^c)=0$. Then we can choose compact sets $K_n\subseteq S$ with $\mu(S\setminus K_n)\to0$. Note that the measures $\mu_n\equiv 1_{K_n}\cdot\mu$ are supported on the compact sets $K_n$ and $\lVert\mu-\mu_n\rVert\to0$. It follows that $\mathcal{H}\mu_n(x)$ is defined everywhere outside of $K_n$ (in fact, $\mathcal{H}_\epsilon\mu_n(x)$ is a constant function of $\epsilon$ for $\epsilon$ small enough that $B_\epsilon(x)\cap K_n=\emptyset$; a numerical illustration of this appears after the revision history). So, $\mathcal{H}\mu_n$ is defined almost everywhere and, hence, so is $\mathcal{H}\mu$.

1 Showing that the two definitions agree almost everywhere is easy! Using the truncated transform $$\mathcal{H}_\epsilon\mu(x)=\frac1\pi\int_{\lvert y-x\rvert > \epsilon}\frac{d\mu(y)}{x-y}$$ then, by definition, $\mathcal{H}\mu(x)=\lim_{\epsilon\to0}\mathcal{H}_\epsilon\mu(x)$ for all $x$ at which the limit exists. Convolve the identity $$\Re\left(\frac{1}{x+ih}\right)=\frac{x}{x^2+h^2}=\int_0^1 1_{\left\lbrace\lvert x\rvert > h\sqrt{t/(1-t)}\right\rbrace}\frac1x\,dt$$ with $\frac1\pi d\mu$ to obtain $$\Re\left(\mathcal{H}\mu\right)(x+ih)=\int_0^1\mathcal{H}_{h\sqrt{t/(1-t)}}\mu(x)\,dt.$$ The integrand on the right-hand side tends to $\mathcal{H}\mu(x)$ as $h\to0$, whenever this is defined, so bounded convergence gives $\Re(\mathcal{H}\mu)(x+ih)\to\mathcal{H}\mu(x)$.
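The remark in revision 2 that $\mathcal{H}_\epsilon\mu_n(x)$ is eventually constant in $\epsilon$ away from the support can be seen numerically. Here is a small sketch (all names invented) for a discrete measure supported on a compact set $K$, evaluated at a point outside $K$:

```python
import numpy as np

def truncated_hilbert(atoms, weights, x, eps):
    """H_eps(mu)(x) for the discrete measure mu = sum_j w_j * delta_{y_j}:
    (1/pi) * sum of w_j / (x - y_j) over the atoms with |y_j - x| > eps."""
    y, w = np.asarray(atoms), np.asarray(weights)
    keep = np.abs(y - x) > eps
    return np.sum(w[keep] / (x - y[keep])) / np.pi

# mu is supported on the compact set K = {-1, 0, 1}; evaluate at x outside K.
atoms, weights, x = [-1.0, 0.0, 1.0], [0.5, 1.0, 0.5], 0.3
for eps in [0.5, 0.31, 0.29, 0.1, 1e-6]:
    print(eps, truncated_hilbert(atoms, weights, x, eps))
# Once eps < dist(x, K) = 0.3 the output stops depending on eps, matching the
# claim that H_eps(mu_n)(x) is constant for eps small enough that B_eps(x)
# does not meet the support.
```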
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363825917243958, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/linear-algebra+algorithm
# Tagged Questions

### Fast calculation of commute distances on large graphs (i.e. fast computation of the pseudo-inverse of a large Laplacian / Kirchhoff matrix)

I have a large, locally connected and undirected graph $G$ with $\approx 10^4$ vertices and $\approx 10^5$ to $\approx 10^6$ edges. Moreover, I can bound the maximum vertex degree by $Q_{max}$. I ...

### Efficient method for inverting a block tridiagonal matrix

Is there a better method to invert a large block tridiagonal Hermitian block matrix, other than treating it as an ordinary matrix? For example: ...

### Computing polynomial eigenvalues in Mathematica

MATLAB offers a function polyeig for computing polynomial eigenvalues, which appear, for instance, in quadratic eigenvalue problems (see here for some applications) such as: \begin{equation} ... [a linearization sketch follows this list]

### Computing Slater determinants

I need to compute Slater determinants. I'm wondering if I would benefit from assigning each of my functions to a variable prior to computation. I'm working with Slater determinants, but my question ... [a direct evaluation sketch follows this list]

### Higher order SVD

Does anyone know how to do a higher-order SVD in Mathematica? A good reference seems to be here http://csmr.ca.sandia.gov/~tgkolda/pubs/bibtgkfiles/TensorReview.pdf but I don't understand their ...
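The polyeig question has a standard, language-independent answer: linearize the matrix polynomial into a generalized eigenvalue problem. The questions above concern Mathematica, but the idea is the same in any system; here is a NumPy/SciPy sketch for the quadratic case (the function name polyeig2 is made up for this illustration):

```python
import numpy as np
from scipy.linalg import eig

def polyeig2(A0, A1, A2):
    """Eigenvalues of the quadratic pencil (lam^2*A2 + lam*A1 + A0) x = 0,
    via the companion linearization A v = lam B v with v = (x, lam*x)."""
    n = A0.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, I], [-A0, -A1]])
    B = np.block([[I, Z], [Z, A2]])
    return eig(A, B, right=False)  # generalized eigenvalues only

# Sanity check: the scalar polynomial lam^2 - 3*lam + 2 has roots 1 and 2.
vals = polyeig2(np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]]))
print(np.sort(vals.real))  # [1. 2.]
```

The linearization doubles the matrix size, which is why dedicated solvers exist for structured pencils, but for moderate sizes this is exactly what polyeig does under the hood.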
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8747325539588928, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78305/ample-divisors-on-projective-surfaces
## Ample divisors on projective surfaces

Question: If $X$ is a projective surface and $U$ is an open affine subset of $X$, then is it true that $X \setminus U$ is the support of an (effective) ample divisor on $X$?

Background: I was reading Goodman's paper "Affine open subsets of algebraic varieties and ample divisors", which considers this same question for general varieties. Here is what I understand so far:

1. If $\dim X = 1$, then the answer to the question is always affirmative. Indeed, if $X$ is a complete curve and $S$ is any finite set of points on $X$, then there is an effective ample Cartier divisor on $X$ with support $S$ (this is Proposition 5 of the paper, and a straightforward application of the Nakai-Moishezon criterion for ampleness).

2. For $\dim X = 2$, Theorem 2 of the paper states that the answer is positive if each point of $X\setminus U$ is factorial (i.e. its local ring is a UFD). Actually, he proves it only assuming that $X$ is complete (i.e. a priori not necessarily projective), and as a corollary he proves Zariski's theorem that if all the singularities of a complete surface $X$ are contained in an affine open subset, then $X$ is projective.

3. He presents two examples (of Hironaka and Zariski) where $X$ is a non-singular projective $3$-fold, but $X\setminus U$ is not the support of any ample divisor.

4. In general, he proves (in Theorem 1) that if $X$ is complete, then a Zariski open subset $U$ of $X$ is affine iff the complement of (the isomorphic image of) $U$ in a blow-up $X'$ of $X$ along a closed subscheme $F$ not meeting $U$ is the support of an effective ample Cartier divisor on $X'$.

5. For $\dim X \geq 3$, he gives a criterion (in Theorem 3) for when the answer to the question is positive.

As far as I can see, he does not mention anything about the status of the question (i.e. whether there is a counter-example or not) for general projective surfaces. Therefore I ask it here. I would expect the answer to be negative, but cannot think of any examples. The case particularly interesting to me is when $X$ is normal.

Edit: As the example of Jason Starr in the comments shows, the answer is negative even for normal surfaces (see my comments for an attempt at a proof). I wonder what happens if $X$ is rational. In any event, I would gladly accept Jason's answer if he writes one. (And I would also greatly appreciate any answer/remark about the rational case.)
-

5 It is not true, even for normal surfaces. Consider the projective cone over a smooth plane cubic. Take a point on the plane cubic whose difference with any flex point is non-torsion in the group of linear equivalence classes of 0-divisors. The line of that point gives a counterexample. – Jason Starr Oct 17 2011 at 11:00

2 @Jason: Why is the complement in your example affine? – J.C. Ottem Oct 17 2011 at 11:59

2 In LNM #156, p.64, Hartshorne shows, following Goodman, that on a complete integral surface, the complement of the support of an effective divisor with no base points is affine. Thus a counterexample to your question arises from any such divisor which is not ample. – roy smith Oct 18 2011 at 3:34

2 Apparently such examples do not exist for non-singular surfaces. – roy smith Oct 18 2011 at 3:36

1 Ok, I think I can see the proof of Jason Starr's claim, at least in the case that the base field is $\mathbb{C}$:
1. The line $L$ of Jason Starr's comment is not $\mathbb{Q}$-Cartier, so $L$ cannot be the support of an ample divisor. 2. The (minimal) desingularization $X'$ of $X$ replaces the vertex of the cone $X$ by a copy $E$ of the original smooth plane cubic $C$ (it is a "cylinder" over $C$ near $E$). 3. To show $U := X\setminus L$ is affine, it suffices to show that there is an ample divisor on $X'$ with support $L' \cup E$, $L'$ being the strict transform of $L$. (contd. ...) – auniket Oct 18 2011 at 19:26
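Point 1 of the comment above can be fleshed out. The sketch below assumes the standard description of the class group of a cone over a projectively normal curve (cf. Hartshorne, Exercise II.6.3 for the affine cone); granting that, the non-torsion hypothesis on $p$ is exactly what blocks $L$ from being $\mathbb{Q}$-Cartier:

```latex
% Sketch (under the standard cone computation): for the cone X over a
% projectively normal curve C \subset P^2,
\[
  \operatorname{Cl}(X)\;\cong\;\operatorname{Cl}(C)\big/\,\mathbb{Z}\cdot[\mathcal{O}_C(1)],
\]
% and the ruling L over a point p \in C maps to the class of p.
% If f \in C is a flex, the tangent line there meets C with multiplicity 3
% at f, so O_C(1) \sim 3f.  Hence mL is Cartier near the vertex iff
\[
  m\,p \;\sim\; 3k\,f \quad\text{on } C \text{ for some } k\in\mathbb{Z},
\]
% which forces m = 3k by comparing degrees, and then m(p - f) \sim 0.
% So L is Q-Cartier iff p - f is torsion in Pic^0(C), which is precisely
% what the choice of p in Jason Starr's comment rules out.
```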
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256443977355957, "perplexity_flag": "head"}