http://physics.stackexchange.com/questions/16943/what-are-the-final-particles-emitted-from-an-evaporating-black-hole
# What are the final particles emitted from an evaporating black hole?

Hawking radiation predicts that black holes can slowly evaporate through the effective emission of a particle. This particle is a real particle, as in, it is not a black hole itself. I'll write this (a bit tongue-in-cheek) as follows, with $A$ being the emitted particle and "black hole prime" being the black hole with slightly reduced mass.

$$\mbox{black hole} \rightarrow A + \mbox{black hole}'$$

There is every reason to think this continues until the black hole stops being a black hole. So we are left with a radiated particle and something else. We don't know much about this process, but I think we can still limit it to a two-product decay.

$$\mbox{black hole} \rightarrow A + B$$

What could the something else, $B$, be? I don't know much about it, but I know that it is:

1. Not a black hole
2. A physically plausible particle

It might be more realistic to ask what other conditions we should impose on $B$. I realize this is probably an unsolved problem. Also, won't $A$ be really highly energetic? How energetic? And what is $A$ to begin with?

- 1 This is definitely in the domain of string theory, and it will depend on the compactification details. If you look at model charged black holes you have a better chance of getting an objective answer. Black holes in string theory are continuous with matter--- there is no sharp separation--- the highly excited matter is a thermal black hole, the cold world-sheet is matter. They all decay according to recognizable black-hole-like vertex-operator emissions. – Ron Maimon Nov 14 '11 at 9:54

## 3 Answers

Most of the time while the black hole is evaporating, the "A" particles will be photons (or other massless particles like gravitons). The reason for this is that the black hole emits radiation as if it were a black body, and the temperature of that black body is inversely proportional to the black hole's mass. For a black hole with mass equal to the mass of the Sun, the temperature would be about 60 nanokelvin. So the energy of the emitted particles has to be extremely low, which is why they need to be massless particles like photons or gravitons.

Now in our universe (today), if the mass of the black hole is greater than about the mass of the Moon, it would actually gain more energy from the cosmic microwave background radiation than it would lose to Hawking radiation, since the black-body temperature of a black hole more massive than the Moon is lower than the CMB black-body temperature of 2.7 K. The CMB temperature decreases over time, so eventually all black holes will evaporate, but it will take an extremely long time for this to happen.

However, as the black hole gets smaller and smaller, its black-body temperature rises, so that eventually the temperature is high enough that massive particles can be emitted. For a given amount of energy being emitted, any particle with a rest mass below that energy could be the emitted particle. The reason for this is that Hawking radiation results from vacuum virtual pair creation in the vicinity of the event horizon. Since all particles participate in vacuum virtual pair creation, any particle that is compatible with the black-body energy spectrum will be emitted. As the mass of the black hole approaches zero, the black-body temperature approaches infinity, so the evaporation gets faster and faster, with more and more energetic particles.
So the last two particles emitted would be any two particles compatible with the remaining mass of the micro black hole. If, for example, a micro black hole could be created at the LHC, it would evaporate very rapidly into an isotropic spray of particles of all types compatible with the energy of the black hole. That would be the event signature found by the LHC detectors. -

There is no exact model of the endpoint of black hole evaporation. However, I am persuaded that Samir Mathur's "fuzzball" model of black holes is the right one. In a fuzzball, you don't have a singularity or even an event horizon, because the fuzzball physically extends to where the horizon would be found in classical general relativity. When Mathur constructs individual bound states corresponding to the extremal black holes found in string theory, they have this feature. So black holes are fuzzballs, and fuzzballs are bound states of strings and branes, and fuzzball evaporation results from these bound objects occasionally escaping, not from pair production at a nonexistent event horizon.

The endpoint of fuzzball evaporation will be a fuzzball so small it is just an ordinary bound state. At this point FrankH's answer will apply: the black hole will have disintegrated into a spray of ordinary particles. But my point is that we apparently need the fuzzball description of black holes, which is still work in progress, in order to really understand what's going on. -

The Hawking radiation from a black hole and the Unruh effect, where an accelerating observer detects black-body radiation, are intimately related. The "fuzzball" idea doesn't apply to Unruh radiation, does it? In my opinion multiple models can explain the same physics, and virtual particles at an event horizon or fuzzballs may just be two equivalent models. – FrankH Mar 5 '12 at 21:28

Our world is a quantum world, so everything, including any stage of Hawking evaporation, can only be predicted probabilistically. At the end of its life, a tiny black hole has a mass comparable to the Planck mass, and such a tiny black hole becomes qualitatively the same thing as just another heavy unstable species of elementary particle.

A question is whether or not you know the precise microstate of such a black hole. If you know it, you may predict the probabilities of different final products accurately from a well-defined compactification of string/M-theory (without string/M-theory, you will clearly be unable to make precise predictions of quantum gravity phenomena, and this is a textbook example of one). If you don't know the exact microstate, it is still true that, roughly speaking, the small black hole emits thermal radiation. However, at the very end of the black hole's life, its temperature goes up a lot. At the very end, the temperature is close to the Planck temperature (the highest temperature that may even marginally be talked about in physics), so the decay products may include (with high probability) very heavy particle species, too. Right before it disappears, a black hole may surely produce a pair of top quarks or even heavier particles.

There's still a nonzero probability that it will decay to two photons or anything else that doesn't violate conservation laws. Actually, the probability is nonzero that a black hole emits another, smaller black hole. It's just very unlikely: such a process is essentially suppressed by $\exp(-S)$ where $S$ is the entropy of the emitted black hole.
For macroscopic black holes, such a factor is zero for all practical purposes. However, if you want to emit black holes that are slightly larger than the minimal (Planckian) black hole, the factor isn't hopelessly tiny and the emission of a small black hole is a possibility.

Again, any black hole microstate may always be interpreted as yet another species of elementary particle. Large black holes have a high entropy, and the description in terms of "exponentially many new particle species" becomes contrived. However, for the smallest (Planckian) black holes, the description in terms of new particle species becomes a condition for any accurate description of the black hole's behavior. -

To be completely clear: all black holes are over the Planck mass and all elementary particles are under the Planck mass? – AlanSE Nov 14 '11 at 12:51

You can say it this way. But again, this is a convention. For particles of mass comparable to the Planck mass, both particle physics of the kind we know from light particles and the effects of gravity we know from black holes are important. There is no sharp qualitative separation of these objects into elementary particles and black hole microstates. The transition is continuous. The transitional regime near the Planck mass is the only regime in which a detailed model of quantum gravity is really needed to make predictions: lighter objects are just QFT, and heavier ones are described by GR. – Luboš Motl Nov 14 '11 at 13:17

"There's still a nonzero probability that it will decay to two photons or anything else that doesn't violate conservation laws." It's important to note that these conservation laws don't include all the conservation laws that are normally believed to hold in particle physics. For instance, baryon number is not conserved. There is a nice discussion of this in Wald, p. 413. – Ben Crowell Nov 14 '11 at 16:54
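As a numerical sanity check of the temperature figures quoted in the first answer, here is a short sketch using the standard Hawking temperature formula $T_H = \hbar c^3 / (8\pi G M k_B)$ (the constants are standard SI values; the masses and CMB temperature are the commonly quoted ones):

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # Boltzmann constant, J/K

M_sun  = 1.989e30        # solar mass, kg
M_moon = 7.342e22        # lunar mass, kg
T_cmb  = 2.725           # CMB temperature today, K

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

# A solar-mass black hole: ~6e-8 K, i.e. about 60 nanokelvin, as the answer says.
print(f"T(solar mass)  = {hawking_temperature(M_sun):.2e} K")

# Mass at which the Hawking temperature equals the CMB temperature: ~4.5e22 kg,
# the same order as the Moon's mass (7.3e22 kg) quoted in the answer.
M_crit = hbar * c**3 / (8 * math.pi * G * k_B * T_cmb)
print(f"M at T = T_CMB = {M_crit:.2e} kg (Moon: {M_moon:.2e} kg)")
```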
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386398792266846, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115706/how-to-solve-a-system-of-linear-equations-without-storing-the-matrix/115716
## How to solve a system of linear equations without storing the matrix?

I have a procedurally defined Hermitian matrix $M$, i.e. I can get any matrix element by calling a black-box function (e.g. a library function), and a vector $Y$. I have to solve the system of linear equations $M\cdot X=Y$. But $Y$ is such that, having $n$ elements, it takes about half of the available RAM, and the other half is needed for $X$. So if I try to store $M$, it'll take $n^2$ space, i.e. even if I double the RAM, this will still place $n-2$ matrix columns/rows into swap. At the same time, all the algorithms I've found need to store large amounts ($\geq n^2$) of data while solving the system. So, the question: are there any algorithms to solve systems of linear equations which don't require storing the matrix, and are still fast enough (maybe not $O(n^3)$, but at least not much slower)?

- In principle, determinants, and therefore solutions of non-singular linear systems, are computable in $\mathrm{NC}^2$, and therefore in space $O(\log^2 n)$ (not including the input and output). However, I rather doubt such algorithms would be practical. In particular, they need time $n^{O(\log n)}$, which most likely wouldn't count as "not much slower". – Emil Jeřábek Dec 7 at 12:27

3 If you don't need the exact solution but some approximation, you can use an iterative method like Gauss-Seidel. – Brendan McKay Dec 7 at 12:40

I assume reading parts of the matrix, modifying them, and writing back is out of the question, since this is essentially the same as having a huge swap...? – Per Alexandersson Dec 7 at 15:16

@Per Alexandersson Of course, this is not an option. Even if HDD/SSD speed were comparable with RAM speed, it'd take petabytes of space for about a gig of RAM. – 10110111 Dec 7 at 16:03

## 2 Answers

This looks like a situation where the Kaczmarz method could work. What you do is maintain an approximate solution and project cyclically onto the hyperplanes given by the individual equations. More precisely: if you have the $m$-th iterate $X^m$ and use the $k$-th equation, then the next iterate is $$X^{m+1} = X^m + \frac{Y_k - a_k^T\cdot X^m}{\|a_k\|^2}\,a_k$$ where $a_k$ is the $k$-th row of $A$ and $Y_k$ is the $k$-th entry of $Y$. Hence, you only need one row of $A$, one entry of $Y$, and the current iterate $X^m$ to perform one iteration, i.e. $2n+1$ space. The iteration complexity is also very low (it's $\mathcal{O}(n)$), but you usually need a lot of iterations. This method is widely used in discrete tomography and is also an instance of the "projection onto convex sets" method. Recently it has been shown by Strohmer and Vershynin that a randomized version of this method has favorable convergence properties (you pick each row with probability proportional to the square of its norm). "Block iterative" versions also work, i.e. you project onto a hyperplane of higher codimension. So, if you have some memory left, you could also take several rows of $A$ at once... See, e.g., here. -

Well, it indeed converges rather slowly. Though, I've not tried the randomized version yet. Do I understand correctly that I can use this method without the requirement that the matrix be real or diagonally dominant, as for Gauss-Seidel? – 10110111 Dec 9 at 11:23

Granted, convergence can be slow (in terms of iteration count and computational effort; its advantage is low memory).
You don't need any requirements on the matrix (for the complex case, adjust the projection accordingly). In fact, you could also apply the method to rectangular systems. It converges to some solution in the underdetermined case (and to the minimum-norm solution if initialized with zero). In the overdetermined case you need to stop at some point, as you'll see that the residual $\|AX-Y\|$ stops decreasing. – Dirk Dec 9 at 12:21

Being able to get elements of the matrix isn't very useful (particularly if you don't know where the nonzero elements of the matrix are without checking). Iterative methods can be useful if you have the ability to compute matrix-vector products $Mx$. You haven't said whether this is possible. It seems quite likely that there's some special structure to your particular problem that would make it possible to simplify this computation. You haven't told us anything about where the system of equations comes from; perhaps if you explained this in some detail we could suggest ways to proceed. -

3 Being able to get elements of the matrix is useful to compute matrix-vector products, for instance. Am I missing something? – Emil Jeřábek Dec 7 at 14:48

As already said above, being able to get matrix elements is enough to compute matrix-vector products. The whole matrix consists mostly of non-zero elements, so it's not useful to search for any zero ones. As for structure, the matrix is an effective Hamiltonian, so all I can say for sure is that it's Hermitian. As for the origin of the system of equations, I'm trying to use inverse iteration with shift to find specific eigenvectors of this matrix, so $Y$ is the current approximation of an eigenvector, and $X$ is the next-step approximation. – 10110111 Dec 7 at 15:54

Let me clarify what I meant here: "being able to get an arbitrary element $M(i,j)$ at little cost" isn't very useful by itself. If you don't know where the nonzero elements are in the matrix, then you have to check every single one to find the nonzeros. If you do happen to know where the nonzero elements are, and you can compute them quickly, then you could use this as a way to do matrix-vector multiplications in an iterative method. – Brian Borchers Dec 16 at 14:22

If the matrix isn't sparse, and the cost of getting individual matrix entries is large compared to the cost of accessing an element of a matrix stored in conventional dense form, then iterative methods are going to be horribly slow in practice. – Brian Borchers Dec 16 at 14:25
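To make the memory argument concrete, here is a minimal sketch of the randomized Kaczmarz iteration described in the accepted answer, assuming a black-box `row(k)` function that returns the $k$-th row of $M$ on demand (the function name and the toy usage at the end are illustrative, not part of the original question):

```python
import numpy as np

def kaczmarz_solve(row, Y, n_iters=100_000, seed=0):
    """Randomized Kaczmarz for M.X = Y, where row(k) returns row k of M on demand.

    Only one row of M is held in memory at a time, so storage is O(n).
    Rows are sampled with probability proportional to their squared norms,
    following Strohmer and Vershynin's randomized variant.
    """
    rng = np.random.default_rng(seed)
    n = len(Y)
    # One O(n^2)-time but O(n)-memory pass to get the sampling weights.
    sq_norms = np.array([np.linalg.norm(row(k))**2 for k in range(n)])
    probs = sq_norms / sq_norms.sum()

    X = np.zeros(n, dtype=complex)
    for _ in range(n_iters):
        k = rng.choice(n, p=probs)
        a = row(k)                        # fetch row k from the black box
        # Project X onto the hyperplane {x : a.x = Y[k]}; the dot product is
        # unconjugated since a is a row of M, and the conjugate appears in
        # the update direction for the complex case.
        X += (Y[k] - np.dot(a, X)) / sq_norms[k] * np.conj(a)
    return X

# Toy usage with a small Hermitian matrix standing in for the black box:
M = np.array([[4.0, 1 + 1j], [1 - 1j, 3.0]])
Y = np.array([1.0 + 0j, 2.0 + 0j])
X = kaczmarz_solve(lambda k: M[k], Y, n_iters=2000)
print(np.allclose(M @ X, Y, atol=1e-6))   # True
```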
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475875496864319, "perplexity_flag": "head"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.bbms/1337864268
### Petrov-Galerkin method with cubic B-splines for solving the MEW equation

Turabi Geyikli and S. Battal Gazi Karakoç

Source: Bull. Belg. Math. Soc. Simon Stevin, Volume 19, Number 2 (2012), 215-227.

#### Abstract

In the present paper, we introduce a numerical solution algorithm based on a Petrov-Galerkin method in which the element shape functions are cubic B-splines and the weight functions are quadratic B-splines. The motion of a single solitary wave and the interaction of two solitary waves are studied. The accuracy and efficiency of the proposed method are discussed by computing the numerical conserved laws and the $L_{2}$ and $L_{\infty}$ error norms. The obtained results show that the present method is a remarkably successful numerical technique for solving the modified equal width wave (MEW) equation. A linear stability analysis of the scheme shows that it is unconditionally stable.

Primary Subjects: 65N30, 65D07, 76B25
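For reference, the two error norms the abstract mentions are standard. A short sketch of their discrete forms, assuming the numerical and exact solutions are sampled on a common uniform grid of spacing `h` (the `h`-weighted sum is one common convention; the names are mine):

```python
import numpy as np

def error_norms(u_num, u_exact, h):
    """Discrete L2 and L-infinity error norms on a uniform grid of spacing h."""
    err = u_num - u_exact
    L2 = np.sqrt(h * np.sum(err**2))   # discrete L2 norm
    Linf = np.max(np.abs(err))         # maximum pointwise error
    return L2, Linf
```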
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8935768604278564, "perplexity_flag": "head"}
http://www.reference.com/browse/wiki/Boyer-Lindquist_coordinates
# Boyer-Lindquist coordinates

A generalization of the coordinates used for the metric of a Schwarzschild black hole that can be used to express the metric of a Kerr black hole. The coordinate transformation from Boyer-Lindquist coordinates $r$, $\theta$, $\phi$ to Cartesian coordinates $x$, $y$, $z$ is given by

$$x = \sqrt{r^2 + a^2}\,\sin\theta\cos\phi$$

$$y = \sqrt{r^2 + a^2}\,\sin\theta\sin\phi$$

$$z = r\cos\theta$$

For a physical interpretation of the parameter $a$, see the Kerr metric.

## References

• Boyer, R. H. and Lindquist, R. W. "Maximal Analytic Extension of the Kerr Metric." J. Math. Phys. 8, 265-281, 1967.
• Shapiro, S. L. and Teukolsky, S. A. Black Holes, White Dwarfs, and Neutron Stars: The Physics of Compact Objects. New York: Wiley, p. 357, 1983.
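A direct transcription of the transformation above (a sketch; the function and parameter names are mine):

```python
import math

def boyer_lindquist_to_cartesian(r, theta, phi, a):
    """Map Boyer-Lindquist (r, theta, phi) with spin parameter a to (x, y, z)."""
    s = math.sqrt(r**2 + a**2)
    x = s * math.sin(theta) * math.cos(phi)
    y = s * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# For a = 0 this reduces to ordinary spherical coordinates, as expected
# for the non-rotating (Schwarzschild) case.
print(boyer_lindquist_to_cartesian(2.0, math.pi / 2, 0.0, 0.5))
```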
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.5358166694641113, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/70952/smooth-real-analytic-interpolation-of-monotonic-sequence/108611
smooth/real analytic interpolation of monotonic sequence

Let $x_n$ be an increasing sequence of negative real numbers that converges to $0$. Let $f$ be a function defined on the $x_n$ such that $f(x_n)$ is an increasing sequence of negative numbers that converges to zero. Can I find a $C^\infty$ or real-analytic function $g$ defined on a ball around zero such that $g(x_n)=f(x_n)$ for all $n$ larger than some positive integer?

- 5 No. Take $x_n=-\frac 1{n^2}$ and $f(x_n)=-\frac 1n$. – fedja Jul 22 2011 at 3:06

3 Answers

As fedja says in the comments, the answer is "No" in general. However, there is a known condition for when it is possible to interpolate a sequence by a smooth function. This can be found in Section 2.8 of Chapter I of Kriegl and Michor's The Convenient Setting for Global Analysis (though it is not said where the concept originates from). The set-up is as follows: we have a locally convex topological vector space $E$ and a sequence $(x_n)$ in $E$. We say that $(x_n)$ converges fast (or falls fast) to $x \in E$ if, for each $k \in \mathbb{N}$, the sequence $n^k(x_n - x)$ is bounded. Then the result is:

Special Curve Lemma: Let $(x_n)$ be a sequence which converges fast to $x$ in $E$. Then the infinite polygon through the $(x_n)$ can be parameterized as a smooth curve $c \colon \mathbb{R} \to E$ such that $c(1/n) = x_n$ and $c(0) = x$.

Although this is only an "if", it shouldn't be hard to check whether the "only if" holds or not. For a quick proof of the above, see Kriegl and Michor's book (p. 16 in the printed version; it's free online via the above link). This uses the notion of smoothness in LCTVSs that Kriegl and Michor describe (in great detail) in their book. For finite-dimensional vector spaces, it is the same as the usual one. -

2 If you forgive an off-topic humorous aside, I notice from the book's website that the FREE pdf download apparently masses 4.68 milligrams. I wonder how they worked that out? – Harald Hanche-Olsen Jul 22 2011 at 10:25

Let me remark on the $C^m$ version of the question for $m \geq 1$. A classic result of H. Whitney from "Differentiable functions defined in closed sets I," Transactions A.M.S. 36 (1934), 369–387, reads as follows:

Suppose that $f : \lbrace x_1, x_2, \cdots\rbrace \rightarrow \mathbb{R}$ is given, with $x_k$ some convergent increasing sequence. Suppose that the $k$-th divided difference quotient based at $x_n$, defined inductively by $$\Delta_{n,n+k+1} f := \frac{\Delta_{n,n+k} f - \Delta_{n+1,n+k+1} f}{x_{n} - x_{n+k+1}}$$ and $$\Delta_{n,n} f = f(x_n),$$ satisfies $|\Delta_{n,n+k+1} f| \leq \Lambda$ uniformly in $n \geq 1$ and $0 \leq k \leq m$ for some $\Lambda < \infty$. In addition, suppose that $$\Delta_{n,n+k+1} f \rightarrow c_k$$ as $n \rightarrow \infty$ for each $0 \leq k \leq m$. Then there exists a $C^m$ function $g: \mathbb{R} \rightarrow \mathbb{R}$ with $g(x_n) = f(x_n)$ for all $n \geq 1$ and $$\|g\|_{C^m} \leq C(m) \Lambda.$$

In Whitney's original paper, this theorem was proven in greater generality for $f : E \rightarrow \mathbb{R}$ with $E \subset \mathbb{R}$ arbitrary and closed, instead of the special case where $E$ consists of a sequence with a single limit point.
I believe that a similar constructive characterization is known for the $C^\infty$ case under the assumption that $\lbrace x_k\rbrace$ is quickly decreasing in the sense that $P(k) |x_k| \rightarrow 0$ for every polynomial $P(k)$. Even for the example of $x_n=-1/n$ I do not know the answer for a general $f$. For quickly decreasing sequences $\lbrace x_k \rbrace$, a recent result used in work of Charles Fefferman and Fulvio Ricci gives a characterization of which functions $f: \lbrace x_k \rbrace \rightarrow \mathbb{R}$ can be extended into $C^\infty(\mathbb{R})$. Unfortunately, I learned of this at a recent conference and I cannot locate the preprint. I am not sure of the relationship between their result and Andrew's answer, since polygonal curves are never graphs of $C^\infty$ functions (or even $C^1$ functions). -

The following paper answers this question for functions of a real variable under weaker assumptions than the Special Curve Lemma mentioned above: MR1245559 (94k:26024) Frölicher, Alfred; Kriegl, Andreas. Differentiable extensions of functions. Differential Geom. Appl. 3 (1993), no. 1, 71–90. -
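A small sketch of the divided-difference table from Whitney's criterion above, useful for checking the boundedness condition numerically on a given sequence (the helper name is mine; the grid and function come from fedja's counterexample):

```python
import numpy as np

def divided_differences(x, fx, m):
    """Return rows Delta^0, ..., Delta^m of the divided-difference table.

    row[k][n] = Delta_{n, n+k} f, built by Whitney's inductive formula:
    Delta_{n,n+k+1} f = (Delta_{n,n+k} f - Delta_{n+1,n+k+1} f) / (x_n - x_{n+k+1}).
    """
    rows = [list(fx)]
    for k in range(m):
        prev = rows[-1]
        rows.append([(prev[n] - prev[n + 1]) / (x[n] - x[n + k + 1])
                     for n in range(len(prev) - 1)])
    return rows

# fedja's counterexample: x_n = -1/n^2, f(x_n) = -1/n.
N = 200
x = np.array([-1.0 / n**2 for n in range(1, N + 1)])
fx = np.array([-1.0 / n for n in range(1, N + 1)])
rows = divided_differences(x, fx, 1)
# The first divided differences grow roughly like n/2, so already the C^1
# condition |Delta_{n,n+1} f| <= Lambda fails, consistent with the "No" above.
print(max(abs(v) for v in rows[1]))
```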
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292823076248169, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/4023/using-pairings-to-verify-an-extended-euclidean-relation-without-leaking-the-valu?answertab=active
# Using pairings to verify an extended euclidean relation without leaking the values?

Let $P_i(x)$, $i=1,\ldots,n$, be polynomials, $s$ some value, and $g$ a generator of a group $G$ where the discrete logarithm is hard. Assume a prover wants to convince a verifier having access to the values $g^{P_i(s)}$ that it knows polynomials $q_i(x)$ such that the following equation holds: $$q_1(s)*P_1(s)+q_2(s)*P_2(s)+\ldots+q_n(s)*P_n(s) = 1.$$ The prover thus sends the verifier the values $g^{q_i(s)}$, and bilinear maps are used to verify the correctness of the answer. Can the following equation tell me whether the prover has correctly computed the $q_i(s)$: $$e(g^{q_i(s)},g^{P_i(s)}) = e(g,g)$$ where $e$ is a bilinear map $G\times G\rightarrow G_1$? So basically I want my bilinear map to verify on the exponents while hiding them. Is the second part of the equation correct? Or should it be $1$, or $e(g,g)^{\mathrm{ord}(G)}=1$?

- There seems to be a typo or grammatical error in the question, which has me lost. "...for some secret s that the prover which is...": is there some word or sequence of words missing in the middle of this? – D.W. Oct 19 '12 at 5:49

Also, I don't understand what information is known to which parties. Who knows $s$? Who knows $S$? Does anyone know the polynomials $P_i(x)$ and $q_i(x)$? Who knows the value of $P_i(s)$ and $q_i(s)$ (these polynomials evaluated at $s$)? And, your definition of $P_i(s)$ does not seem to have any dependence on $i$. Did you mean to use $S_i$ instead of $S$ in that equation? If so, who knows the $S_i$'s? – D.W. Oct 19 '12 at 5:51

The exact definition of the $P_i$, $q_i$, and $s$ is not really relevant to the question. But in case you really want to know the dirty details, look at the dirty details of set intersection queries... – bob Oct 19 '12 at 16:43

## 1 Answer

Well, assuming the equation holds, $\Pi_{i=1}^n e(g^{q_i(s)},g^{P_i(s)}) = e(g,g)$ must also hold, due to the bilinearity of $e$. (Conversely, the exponent equation only needs to hold mod $|G_1|$.) To see why, recall that the fact that the mapping $e$ is bilinear translates into $e(g^a,h^b)=e(g,h)^{a*b}$ for all elements $g$ and $h$ in $G$ and for all integers $a$ and $b$. Thus, we always have $e(g^{q_i(s)},g^{P_i(s)})=e(g,g)^{q_i(s)*P_i(s)}$, and by multiplying all these equalities together we get that $$\Pi_{i=1}^n e(g^{q_i(s)},g^{P_i(s)}) =e(g,g)^{q_1(s)*P_1(s)+\cdots+q_n(s)*P_n(s)}.$$ Now what you want to check is whether $q_1(s)*P_1(s)+\cdots+q_n(s)*P_n(s)=1$: if it is true, we can replace the expression on the left-hand side of this equation in the pairing equation above by $1$ and thus get $$\Pi_{i=1}^n e(g^{q_i(s)},g^{P_i(s)}) = e(g,g)$$ as stated in the beginning.

Since $e(g,g)$ is a group element of $G_1$, it must hold that $e(g,g)^{|G_1|}=1$, as for any other element of $G_1$. For the actual value of $e(g,g)$ itself, we can only say it is an element of $G_1$ (by definition of $e$). It can be the neutral element of $G_1$, in which case $e(g^a,g^b)=1$ for all $a$ and $b$, and the pairing thus does not provide you with a way to check the extended euclidean equality, since you'll basically get $1=1$ for all possible values of $q_i$ and $P_i$. -

So $e(g,g)=1$? I am always confused whether or not $e(g,g)=e(g,g)^{\mathrm{ord}(G)}=1$. – curious Oct 11 '12 at 9:38

So to conclude: in all bilinear mappings from $G$ to $G_1$ where $g$ is the generator of $G$, $e(g,g)=1$. Right? – curious Oct 11 '12 at 11:24

No: $e(g,g)$ is just an element of the group $G_1$, since $e$ maps pairs of elements from $G$ to elements of $G_1$.
Now, if $e(g,g)=1$, that is, if $e(g,g)$ is the neutral element of $G_1$, then you can basically do nothing with the pairing $e$ using the generator $g$, since then $e(g^a,g^b)=e(g,g)^{a*b}=1^{a*b}=1$. So, to summarize, it can happen that $e(g,g)=1$ for a specific $g$ and $e$, but this case is devoid of interest to build upon for cryptographic purposes (a degenerate case). – bob Oct 11 '12 at 11:54

So can I be sure that $e(g^{q_i(s)},g^{P_i(s)}) = e(g,g)$ holds? – curious Oct 11 '12 at 12:57

2 No. As I said in the answer, assuming the output of the extended euclidean algorithm is correct, $\Pi_{i=1}^n e(g^{q_i(s)},g^{P_i(s)}) = e(g,g)$ must also hold. Please note the $\Pi$ in there: it means that you have to multiply, with the group law in $G_1$, all of the $e(g^{q_i(s)},g^{P_i(s)})$ together to get the group element $e(g,g)$. – bob Oct 11 '12 at 19:01
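To illustrate the product-of-pairings check from the answer, here is a toy sketch that works directly in the exponents modulo a small group order (all parameter values are made up for illustration; a real system would use an actual pairing library over elliptic-curve groups, and the verifier never sees the exponents themselves):

```python
# Toy model: a symmetric pairing satisfies e(g^a, g^b) = e(g,g)^(a*b), so the
# check  prod_i e(g^{q_i(s)}, g^{P_i(s)}) == e(g,g)  succeeds exactly when
# sum_i q_i(s)*P_i(s) == 1 modulo the group order.
r = 101  # toy prime order of G and G_1 (illustrative only)

def pairing_exponent(a, b):
    """Exponent of e(g,g) obtained by pairing g^a with g^b (bilinearity)."""
    return (a * b) % r

def verify(q_values, P_values):
    """Multiplying pairings in G_1 adds their exponents, so the product of
    all e(g^{q_i}, g^{P_i}) equals e(g,g) iff sum q_i*P_i = 1 (mod r)."""
    total = sum(pairing_exponent(q, P) for q, P in zip(q_values, P_values)) % r
    return total == 1

P_values = [7, 5]
q_values = [3, 97]                  # 7*3 + 5*97 = 506 = 5*101 + 1, i.e. 1 mod 101
print(verify(q_values, P_values))   # True: the Bezout-style relation holds
print(verify([3, 96], P_values))    # False: 7*3 + 5*96 = 501, not 1 mod 101
```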
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 80, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293544888496399, "perplexity_flag": "head"}
http://catalog.flatworldknowledge.com/bookhub/reader/21?e=rittenberg-ch06_s02
# Principles of Microeconomics, v. 1.0

by Libby Rittenberg and Timothy Tregarthen

## 6.1 The Logic of Maximizing Behavior

### Learning Objectives

1. Explain the maximization assumption that economists make in explaining the behavior of consumers and firms.
2. Explain and illustrate the concepts of marginal benefit and marginal cost and apply them to understanding the marginal decision rule.

To say that individuals maximize is to say that they pick some objective and then seek to maximize its value. A sprinter might want to maximize his or her speed; a politician might want to maximize the probability that he or she will win the next election. Economists pay special attention to two groups of maximizers: consumers and firms. We assume that consumers seek to maximize utility and that firms seek to maximize economic profit, which is the difference between total revenue and total cost. The costs involved in this concept of economic profit are computed in the economic sense—as the opportunity costs, or value of the best opportunity forgone.

The assumption of maximizing behavior lies at the heart of economic analysis. As we explore its implications, however, we must keep in mind the distinction between models and the real world. Our model assumes that individuals make choices in a way that achieves a maximum value for some clearly defined objective. In using such a model, economists do not assume that people actually go through the calculations we will describe. What economists do argue is that people's behavior is broadly consistent with such a model. People may not consciously seek to maximize anything, but they behave as though they do.

## The Analysis of Maximizing Behavior

The activities of consumers and firms have benefits, and they also have opportunity costs. We assume that given these benefits and costs, consumers and firms will make choices that maximize the net benefit of each activity—the total benefit of the activity minus its opportunity cost. The specific measures of benefit and cost vary with the kind of choice being made.
In the case of a firm's choices in production, for example, the total benefit of production is the revenue a firm receives from selling the product; the total cost is the opportunity cost the firm incurs by producing it. The net benefit is thus total revenue minus total opportunity cost, or economic profit.

Economists maintain that in order to maximize net benefit, consumers and firms evaluate each activity at the margin—they consider the additional benefit and the additional cost of another unit of the activity. Should you "supersize" your order at McDonald's? Will the additional beverage and the additional french fries be worth the extra cost? Should a firm hire one more worker? Will the benefits to the firm of hiring this worker be worth the additional cost of hiring him or her?

The marginal benefit is the amount by which an additional unit of an activity increases its total benefit. It is the amount by which the extra french fries increase your satisfaction, or the extra revenue the firm expects to bring in by hiring another worker. The marginal cost is the amount by which an additional unit of an activity increases its total cost. You will pay more to supersize your McDonald's order; the firm's labor costs will rise when it hires another worker.

To determine the quantity of any activity that will maximize its net benefit, we apply the marginal decision rule: If the marginal benefit of an additional unit of an activity exceeds the marginal cost, the quantity of the activity should be increased. If the marginal benefit is less than the marginal cost, the quantity should be reduced. Net benefit is maximized at the point at which marginal benefit equals marginal cost.

The marginal decision rule is at the heart of the economic way of thinking. The rule basically says this: If the additional benefit of one more unit exceeds the extra cost, do it; if not, do not. This simple logic gives us a powerful tool for the analysis of choice. Perhaps more than any other rule in economic analysis, the marginal decision rule typifies the way in which economists analyze problems. We shall apply it in every chapter that follows in the microeconomics portion of this text.

Maximizing choices must be made within the parameters imposed by some constraint, which is a boundary that limits the range of choices that can be made. We assume that a consumer seeks the greatest satisfaction possible within the limits of his or her income or budget. A firm cannot produce beyond the limits of its production capacity at a point in time.

The marginal decision rule forms the foundation for the structure economists use to analyze all choices. At first glance, it may seem that a consumer seeking satisfaction from, say, pizza has little in common with an entrepreneur seeking profit from the production of custom-designed semiconductors. But maximizing choices always follow the marginal decision rule—and that rule holds regardless of what is being maximized or who is doing the maximizing. To see how the logic of maximizing choices works, we will examine a specific problem.
We will then extend that problem to the general analysis of maximizing choices.

## A Problem in Maximization

Suppose a college student, Laurie Phan, faces two midterms tomorrow, one in economics and another in accounting. She has already decided to spend 5 hours studying for the two examinations. This decision imposes a constraint on the problem. Suppose that Ms. Phan's goal is to allocate her 5 hours of study so that she increases her total score for the two exams by as much as possible.

Ms. Phan expects the relationship between the time she spends studying for the economics exam and the total gain in her score to be as given by the second row of the table in Panel (a) of Figure 6.1 "The Benefits of Studying Economics". We interpret the expected total gain in her score as the total benefit of study. She expects that 1 hour of study will raise her score by 18 points; 2 hours will raise it by 32 points, and so on. These values are plotted in Panel (b). Notice that the total benefit curve rises, but by smaller and smaller amounts, as she studies more and more. The slope of the curve, which in this case tells us the rate at which her expected score rises with increased study time, falls as we travel up and to the right along the curve.

Figure 6.1 The Benefits of Studying Economics

The table in Panel (a) shows the total benefit and marginal benefit of the time Laurie Phan spends studying for her economics exam. Panel (b) shows the total benefit curve. Panel (c) shows the marginal benefit curve, which is given by the slope of the total benefit curve in Panel (b).

Now look at the third row in the table in Panel (a). It tells us the amount by which each additional hour of study increases her expected score; it gives the marginal benefit of studying for the economics exam. Marginal benefit equals the amount by which total benefit rises with each additional hour of study. Because these marginal benefits are given by the changes in total benefits from additional hours of study, they equal the slope of the total benefit curve. We see this in the relationship between Panels (b) and (c) of Figure 6.1 "The Benefits of Studying Economics". The decreasing slope of the total benefit curve in Panel (b) gives us the downward-sloping marginal benefit curve in Panel (c).

The marginal benefit curve tells us what happens when we pass from one point to another on the total benefit curve, so we have plotted marginal benefits at the midpoints of the hourly intervals in Panel (c). For example, the total benefit curve in Panel (b) tells us that, when Ms. Phan increases her time studying for the economics exam from 2 hours to 3 hours, her total benefit rises from 32 points to 42 points. The increase of 10 points is the marginal benefit of increasing study time for the economics exam from 2 hours to 3 hours. We mark the point for a marginal benefit of 10 points midway between 2 and 3 hours. Because marginal values tell us what happens as we pass from one quantity to the next, we shall always plot them at the midpoints of intervals of the variable on the horizontal axis.

We can perform the same kind of analysis to obtain the marginal benefit curve for studying for the accounting exam. Figure 6.2 "The Marginal Benefits of Studying Accounting" presents this curve. Like the marginal benefit curve for studying economics, it slopes downward. Once again, we have plotted marginal values at the midpoints of the intervals.
Increasing study time in accounting from 0 to 1 hour increases Ms. Phan's expected accounting score by 14 points.

Figure 6.2 The Marginal Benefits of Studying Accounting

The marginal benefit Laurie Phan expects from studying for her accounting exam is shown by the marginal benefit curve. The first hour of study increases her expected score by 14 points, the second hour by 10 points, the third by 6 points, and so on.

Ms. Phan's marginal benefit curves for studying typify a general phenomenon in economics. Marginal benefit curves for virtually all activities, including the activities of consumers and of firms, slope downward. Think about your own experience with studying. On a given day, the first hour spent studying a certain subject probably generates a greater marginal benefit than the second, and the second hour probably generates a greater marginal benefit than the third. You may reach a point at which an extra hour of study is unlikely to yield any benefit at all.

Of course, our example of Laurie Phan's expected exam scores is a highly stylized one. One could hardly expect a student to have a precise set of numbers to guide him or her in allocating study time. But it is certainly the case that students have a rough idea of the likely payoff of study time in different subjects. If you were faced with exams in two subjects, it is likely that you would set aside a certain amount of study time, just as Ms. Phan did in our example. And it is likely that your own experience would serve as a guide in determining how to allocate that time. Economists do not assume that people have numerical scales in their heads with which to draw marginal benefit and marginal cost curves. They merely assume that people act as if they did.

The nature of marginal benefits can change with different applications. For a restaurant, the marginal benefit of serving one more meal can be defined as the revenue that meal produces. For a consumer, the marginal benefit of one more slice of pizza can be considered in terms of the additional satisfaction the pizza will create. But whatever the nature of the benefit, marginal benefits generally fall as quantities increase.

Ms. Phan's falling marginal benefit from hours spent studying accounting has special significance for our analysis of her choice concerning how many hours to devote to economics. In our problem, she had decided to devote 5 hours to studying the two subjects. That means that the opportunity cost of an hour spent studying economics equals the benefit she would have gotten spending that hour studying accounting.

Suppose, for example, that she were to consider spending all 5 hours studying accounting. The marginal benefit curve for studying for her accounting exam tells us that she expects that the fifth hour will add nothing to her score. Shifting that hour to economics would cost nothing. We can say that the marginal cost of the first hour spent studying economics is zero. We obtained this value from the marginal benefit curve for studying accounting in Figure 6.2 "The Marginal Benefits of Studying Accounting". Similarly, we can find the marginal cost of the second hour studying economics. That requires giving up the fourth hour spent on accounting. Figure 6.2 "The Marginal Benefits of Studying Accounting" tells us that the marginal benefit of that hour equals 2—that is the marginal cost of spending the second hour studying economics.

Figure 6.3 "The Marginal Benefits and Marginal Costs of Studying Economics" shows the marginal cost curve of studying economics.
We see that at first, time devoted to studying economics has a low marginal cost. As time spent studying economics increases, however, it requires her to give up study time in accounting that she expects will be more and more productive. The marginal cost curve for studying economics can thus be derived from the marginal benefit curve for studying accounting. Figure 6.3 "The Marginal Benefits and Marginal Costs of Studying Economics" also shows the marginal benefit curve for studying economics that we derived in Panel (c) of Figure 6.1 "The Benefits of Studying Economics".

Figure 6.3 The Marginal Benefits and Marginal Costs of Studying Economics

The marginal benefit curve from Panel (c) of Figure 6.1 "The Benefits of Studying Economics" is shown together with the marginal costs of studying economics. The marginal cost curve is derived from the marginal benefit curve for studying accounting shown in Figure 6.2 "The Marginal Benefits of Studying Accounting".

Just as marginal benefit curves generally slope downward, marginal cost curves generally slope upward, as does the one in Figure 6.3 "The Marginal Benefits and Marginal Costs of Studying Economics". In the case of allocating time, the phenomenon of rising marginal cost results from the simple fact that, the more time a person devotes to one activity, the less time is available for another. And the more one reduces the second activity, the greater the forgone marginal benefits are likely to be. That means the marginal cost curve for that first activity rises.

Because we now have marginal benefit and marginal cost curves for studying economics, we can apply the marginal decision rule. This rule says that, to maximize the net benefit of an activity, a decision maker should increase an activity up to the point at which marginal benefit equals marginal cost. That occurs where the marginal benefit and marginal cost curves intersect, with 3 hours spent studying economics and 2 hours spent studying accounting.

## Using Marginal Benefit and Marginal Cost Curves to Find Net Benefits

We can use marginal benefit and marginal cost curves to show the total benefit, the total cost, and the net benefit of an activity. We will see that equating marginal benefit to marginal cost does, indeed, maximize net benefit. We will also develop another tool to use in interpreting marginal benefit and cost curves.

Panel (a) of Figure 6.4 "The Benefits and Costs of Studying Economics" shows the marginal benefit curve we derived in Panel (c) of Figure 6.1 "The Benefits of Studying Economics". The corresponding point on the marginal benefit curve gives the marginal benefit of the first hour of study for the economics exam, 18 points. This same value equals the area of the rectangle bounded by 0 and 1 hour of study and the marginal benefit of 18. Similarly, the marginal benefit of the second hour, 14 points, is shown by the corresponding point on the marginal benefit curve and by the area of the shaded rectangle bounded by 1 and 2 hours of study. The total benefit of 2 hours of study equals the sum of the areas of the first two rectangles, 32 points. We continue this procedure through the fifth hour of studying economics; the areas for each of the shaded rectangles are shown in the graph.

Figure 6.4 The Benefits and Costs of Studying Economics

Panel (a) shows the marginal benefit curve of Figure 6.1 "The Benefits of Studying Economics".
The total benefit of studying economics at any given quantity of study time is given approximately by the shaded area below the marginal benefit curve up to that level of study. Panel (b) shows the marginal cost curve from Figure 6.3 "The Marginal Benefits and Marginal Costs of Studying Economics". The total cost of studying economics at any given quantity of study is given approximately by the shaded area below the marginal cost curve up to that level of study.

Two features of the curve in Panel (a) of Figure 6.4 "The Benefits and Costs of Studying Economics" are particularly important. First, note that the sum of the areas of the five rectangles, 50 points, equals the total benefit of 5 hours of study given in the table in Panel (a) of Figure 6.1 "The Benefits of Studying Economics". Second, notice that the shaded areas are approximately equal to the area under the marginal benefit curve between 0 and 5 hours of study. We can pick any quantity of study time, and the total benefit of that quantity equals the sum of the shaded rectangles between zero and that quantity. Thus, the total benefit of 2 hours of study equals 32 points, the sum of the areas of the first two rectangles.

Now consider the marginal cost curve in Panel (b) of Figure 6.4 "The Benefits and Costs of Studying Economics". The areas of the shaded rectangles equal the values of marginal cost. The marginal cost of the first hour of study equals zero; there is thus no rectangle under the curve. The marginal cost of the second hour of study equals 2 points; that is the area of the rectangle bounded by 1 and 2 hours of study and a marginal cost of 2. The marginal cost of the third hour of study is 6 points; this is the area of the shaded rectangle bounded by 2 and 3 hours of study and a marginal cost of 6. Looking at the rectangles in Panel (b) over the range of 0 to 5 hours of study, we see that the areas of the five rectangles total 32, the total cost of spending all 5 hours studying economics. And looking at the rectangles, we see that their area is approximately equal to the area under the marginal cost curve between 0 and 5 hours of study.

We have seen that the areas of the rectangles drawn with Laurie Phan's marginal benefit and marginal cost curves equal the total benefit and total cost of studying economics. We have also seen that these areas are roughly equal to the areas under the curves themselves. We can make this last statement much stronger. Suppose, instead of thinking in intervals of whole hours, we think in terms of smaller intervals, say, of 12 minutes. Then each rectangle would be only one-fifth as wide as the rectangles we drew in Figure 6.4 "The Benefits and Costs of Studying Economics". Their areas would still equal the total benefit and total cost of study, and the sum of those areas would be closer to the area under the curves. We have done this for Ms. Phan's marginal benefit curve in Figure 6.5 "The Marginal Benefit Curve and Total Benefit"; notice that the areas of the rectangles closely approximate the area under the curve. They still "stick out" from either side of the curve as did the rectangles we drew in Figure 6.4 "The Benefits and Costs of Studying Economics", but you almost need a magnifying glass to see that. The smaller the interval we choose, the closer the areas under the marginal benefit and marginal cost curves will be to total benefit and total cost. For purposes of our model, we can imagine that the intervals are as small as we like.
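This limiting argument can be checked numerically. A sketch, assuming a smooth marginal benefit curve of the linear form $MB(t) = 20 - 4t$ (this form is my assumption; evaluated at the hourly midpoints it reproduces the chapter's values 18, 14, 10, 6, and 2 exactly):

```python
def mb(t):
    """Assumed smooth marginal benefit curve in points per hour.

    The linear form 20 - 4t is an illustrative assumption: at the hourly
    midpoints 0.5, 1.5, ..., 4.5 it gives 18, 14, 10, 6, 2, matching the text.
    """
    return 20.0 - 4.0 * t

def rectangle_total(width, hours=5.0):
    """Total benefit as a sum of rectangles whose heights are the marginal
    benefits at the midpoints of the intervals, as the chapter plots them."""
    n = round(hours / width)
    return sum(mb((i + 0.5) * width) * width for i in range(n))

# Exact area under mb(t) from 0 to 5: 20*5 - 2*5^2 = 50 points.
print(rectangle_total(1.0))   # hourly rectangles:    50.0
print(rectangle_total(0.2))   # 12-minute rectangles: 50.0
# With this linear curve the midpoint rectangles match the area exactly;
# for a curved marginal benefit schedule the sums would only converge to
# the area under the curve as the interval width shrinks.
```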
Over a particular range of quantity, the area under a marginal benefit curve equals the total benefit of that quantity, and the area under the marginal cost curve equals the total cost of that quantity.

Figure 6.5 The Marginal Benefit Curve and Total Benefit

When the increments used to measure time allocated to studying economics are made smaller, in this case 12 minutes instead of whole hours, the area under the marginal benefit curve is closer to the total benefit of studying that amount of time.

Panel (a) of Figure 6.6 "Using Marginal Benefit and Marginal Cost Curves to Determine Net Benefit" shows marginal benefit and marginal cost curves for studying economics, this time without numbers. We have the usual downward-sloping marginal benefit curve and upward-sloping marginal cost curve. The marginal decision rule tells us to choose D hours studying economics, the quantity at which marginal benefit equals marginal cost at point C. We know that the total benefit of study equals the area under the marginal benefit curve over the range from A to D hours of study, the area ABCD. Total cost equals the area under the marginal cost curve over the same range, or ACD. The difference between total benefit and total cost equals the area between marginal benefit and marginal cost between A and D hours of study; it is the green-shaded triangle ABC. This difference is the net benefit of time spent studying economics.

Panel (b) of Figure 6.6 "Using Marginal Benefit and Marginal Cost Curves to Determine Net Benefit" introduces another important concept. If an activity is carried out at a level less than the efficient level, then net benefits are forgone. The loss in net benefits resulting from a failure to carry out an activity at the efficient level is called a deadweight loss.

Figure 6.6 Using Marginal Benefit and Marginal Cost Curves to Determine Net Benefit

In Panel (a), net benefits are given by the difference between total benefits (as measured by the area under the marginal benefit curve up to any given level of activity) and total costs (as measured by the area under the marginal cost curve up to any given level of activity). Maximum net benefits are found where the marginal benefit curve intersects the marginal cost curve at activity level D. Panel (b) shows that if the level of the activity is restricted to activity level E, net benefits are reduced from the light-green shaded triangle ABC in Panel (a) to the smaller area ABGF. The forgone net benefits, or deadweight loss, are given by the purple-shaded area FGC. If the activity level is increased from D to J, as shown in Panel (c), net benefits decline by the deadweight loss measured by the area CHI.

Now suppose a person increases study time from D to J hours as shown in Panel (c). The area under the marginal cost curve between D and J gives the total cost of increasing study time; it is DCHJ. The total benefit of increasing study time equals the area under the marginal benefit curve between D and J; it is DCIJ. The cost of increasing study time in economics from D hours to J hours exceeds the benefit. This gives us a deadweight loss of CHI. The net benefit of spending J hours studying economics equals the net benefit of studying for D hours less the deadweight loss, or ABC minus CHI. Only by studying up to the point at which marginal benefit equals marginal cost do we achieve the maximum net benefit shown in Panel (a).
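The chapter's numbers make the rule easy to check directly. A minimal sketch using the marginal benefit figures given above (economics: 18, 14, 10, 6, 2 points per successive hour; accounting: 14, 10, 6, 2, 0), which confirms the 3-hour/2-hour split:

```python
# Marginal benefit of each successive hour of study, from the chapter's figures.
mb_econ = [18, 14, 10, 6, 2]   # Figure 6.1, Panel (c)
mb_acct = [14, 10, 6, 2, 0]    # Figure 6.2
TOTAL_HOURS = 5

# Try every split of the 5 hours and keep the one with the largest total gain.
best = max(range(TOTAL_HOURS + 1),
           key=lambda e: sum(mb_econ[:e]) + sum(mb_acct[:TOTAL_HOURS - e]))
gain = sum(mb_econ[:best]) + sum(mb_acct[:TOTAL_HOURS - best])
print(best, TOTAL_HOURS - best, gain)   # 3 economics hours, 2 accounting, 66

# The marginal decision rule gives the same split: the 3rd economics hour has
# marginal benefit 10 and marginal cost 6 (the forgone 3rd accounting hour),
# while a 4th economics hour would have marginal benefit 6 but cost 10.
```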
We can apply the marginal decision rule to the problem in Figure 6.6 "Using Marginal Benefit and Marginal Cost Curves to Determine Net Benefit" in another way. In Panel (b), a person studies economics for E hours. Reading up to the marginal benefit curve, we reach point G. Reading up to the marginal cost curve, we reach point F. Marginal benefit at G exceeds marginal cost at F; the marginal decision rule says economics study should be increased, which would take us toward the intersection of the marginal benefit and marginal cost curves. Spending J hours studying economics, as shown in Panel (c), is too much. Reading up to the marginal benefit and marginal cost curves, we see that marginal cost exceeds marginal benefit, suggesting that study time be reduced.

This completes our introduction to the marginal decision rule and the use of marginal benefit and marginal cost curves. We will spend the remainder of the chapter applying the model.

### Heads Up!

It is easy to make the mistake of assuming that if an activity is carried out up to the point where marginal benefit equals marginal cost, then net benefits must be zero. Remember that following the marginal decision rule and equating marginal benefits and costs maximizes net benefits. It makes the difference between total benefits and total costs as large as possible.

### Key Takeaways

• Economists assume that decision makers make choices in the way that maximizes the value of some objective.
• Maximization involves determining the change in total benefit and the change in total cost associated with each unit of an activity. These changes are called marginal benefit and marginal cost, respectively.
• If the marginal benefit of an activity exceeds the marginal cost, the decision maker will gain by increasing the activity.
• If the marginal cost of an activity exceeds the marginal benefit, the decision maker will gain by reducing the activity.
• The area under the marginal benefit curve for an activity gives its total benefit; the area under the marginal cost curve gives the activity's total cost. Net benefit equals total benefit less total cost.
• The marginal decision rule tells us that we can maximize the net benefit of any activity by choosing the quantity at which marginal benefit equals marginal cost. At this quantity, the net benefit of the activity is maximized.

### Try It!

Suppose Ms. Phan still faces the exams in economics and in accounting, and she still plans to spend a total of 5 hours studying for the two exams. However, she revises her expectations about the degree to which studying economics and accounting will affect her scores on the two exams. She expects studying economics will add somewhat less to her score, and she expects studying accounting will add more. The result is the table below of expected total benefits and total costs of hours spent studying economics. Notice that several values in the table have been omitted. Fill in the missing values in the table. How many hours of study should Ms. Phan devote to economics to maximize her net benefit?

| Hours studying economics | 0 | 1 | 2 | 3 | 4 | 5 |
|--------------------------|----|----|----|----|----|-----|
| Total benefit | 0 | 14 | 24 | 30 | | 32 |
| Total cost | 0 | 2 | 8 | | 32 | 50 |
| Net benefit | 0 | 12 | | 12 | 0 | −18 |

Now compute the marginal benefits and costs of hours devoted to studying economics, completing the table below.
Figure 6.7

Draw the marginal benefit and marginal cost curves for studying economics (remember to plot marginal values at the midpoints of the respective hourly intervals). Do your curves intersect at the “right” number of hours of study—the number that maximizes the net benefit of studying economics?

### Case in Point: Preventing Oil Spills

Figure 6.8

© 2010 Jupiterimages Corporation

Do we spill enough oil in our oceans and waterways? It is a question that perhaps only economists would ask—and, as economists, we should ask it. There is, of course, no virtue in an oil spill. It destroys wildlife and fouls shorelines. Cleanup costs can be tremendous. However, preventing oil spills has costs as well: greater enforcement expenditures and higher costs to shippers of oil and, therefore, higher costs of goods such as gasoline to customers. The only way to prevent oil spills completely is to stop shipping oil. That is a cost few people would accept. But what is the right balance between environmental protection and the satisfaction of consumer demand for oil?

Vanderbilt University economist Mark Cohen examined the U.S. Coast Guard’s efforts to reduce oil spills through its enforcement of shipping regulations in coastal waters and on rivers. He focused on the costs and benefits resulting from the Coast Guard’s enforcement efforts in 1981. On the basis of the frequency of oil spills before the Coast Guard began its enforcement, Mr. Cohen estimated that the Coast Guard prevented 1,159,352 gallons of oil from being spilled in 1981. Given that there was a total of 824,921 gallons of oil actually spilled in 1981, should the Coast Guard have attempted to prevent even more spillage? Mr. Cohen estimated that the marginal benefit of preventing one more gallon from being spilled was $7.27 ($3.42 in cleanup costs, $3.00 less in environmental damage, and $0.85 worth of oil saved). The marginal cost of preventing one more gallon from being spilled was $5.50. Mr. Cohen suggests that because the marginal benefit of more vigorous enforcement exceeded the marginal cost, more vigorous Coast Guard efforts would have been justified.

More vigorous efforts have, indeed, been pursued. In 1989, the Exxon oil tanker Exxon Valdez ran aground, spilling 10.8 million gallons of oil off the coast of Alaska. The spill damaged the shoreline of a national forest, four national wildlife refuges, three national parks, five state parks, four critical habitat areas, and a state game refuge. Exxon was ordered to pay $900 million in damages; a federal jury found Exxon and the captain guilty of criminal negligence and imposed an additional $5 billion in punitive damages. In 2008, the Supreme Court reduced the assessment of punitive damages to $507 million, with the majority arguing that the original figure was too high in comparison to the compensatory damages for a case in which the actions of the defendant, Exxon, were “reprehensible” but not intentional.

Perhaps the most important impact of the Exxon Valdez disaster was the passage of the Oil Pollution Act of 1990. It increased shipper liability from $14 million to $100 million. It also required double-hulled tankers for shipping oil.

The European Union (EU) has also strengthened its standards for oil tankers. The 2002 breakup of the oil tanker Prestige off the coast of Spain resulted in the spillage of 3.2 million gallons of oil. The EU had planned to ban single-hulled tankers, phasing in the ban between 2003 and 2015. The sinking of the Prestige led the EU to move up that deadline.
Spill crises have led both the United States and the European Union to tighten up their regulations of oil tankers. The result has been a reduction in the quantity of oil spilled, which was precisely what economic research had concluded was needed. By 2002, the amount of oil spilled per barrel shipped had fallen 30% from the level three decades earlier.

Sources: Mark A. Cohen, “The Costs and Benefits of Oil Spill Prevention and Enforcement,” Journal of Environmental Economics and Management 13(2) (June 1986): 167–188; Rick S. Kurtz, “Coastal Oil Pollution: Spills, Crisis, and Policy Change,” Review of Policy Research 21(2) (March 2004): 201–219; David S. Savage, “Justices Slash Exxon Valdez Verdict,” Los Angeles Times, June 26, 2008, p. A1; and Edwin Unsworth, “Europe Gets Tougher on Aging Oil Tankers,” Business Insurance 36(48) (December 2, 2002): 33–34.

### Answer to Try It! Problem

Here are the completed data table and the table showing total and marginal benefit and cost.

Figure 6.9

Ms. Phan maximizes her net benefit by reducing her time studying economics to 2 hours. The change in her expectations reduced the benefit and increased the cost of studying economics. The completed graph of marginal benefit and marginal cost is at the far left. Notice that answering the question using the marginal decision rule gives the same answer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944037139415741, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/158665-conditinal-probability-numbers-selected-random.html
# Thread:

1. ## Conditional Probability with numbers selected at random

Suppose that three numbers are selected one by one, at random, and without replacement from the set of numbers {1, 2, 3, ..., n}. What is the probability that the third number falls between the first two if the first is smaller than the second? The book tells me that the answer is 1/3, but that isn't much help towards how to find the answer.

2. Hello, Zennie!

$\text{Suppose that three numbers are selected randomly without replacement}$
$\text{from the set of numbers }\{1,\,2,\,3,\,\hdots\, n\}.$
$\text{What is the probability that the third number falls between the first two}$
$\text{if the first is smaller than the second?}$
$\text{The book answer is: }\frac{1}{3}$

Let the first, second and third numbers drawn be $a,\,b,\,c$. There are $3! = 6$ equally likely size-orderings of the three numbers; each string below lists the values in increasing order (so $acb$ means $a < c < b$):

$abc,\:acb,\:bac,\:bca,\:cab,\:cba$

"The first is less than the second": $a < b$. There are three such cases: $abc,\:acb,\:cab$.

In only one of them is $\,c$ between $\,a$ and $\,b$: $acb$.

Therefore, the probability is $\dfrac{1}{3}$.

3. Wow, thanks a lot. That's so much simpler than I was expecting. I probably wouldn't have thought about it that way.

4. It is not such a tough question. Try using this hint: pick three numbers (in order) $x_1, x_2, x_3$ from {1, 2, 3, ..., n}. What is Prob($x_1 < x_2 < x_3$)? [You don't need any calculation - think symmetry!]
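The symmetry argument is easy to corroborate by simulation; here is a quick sketch of my own (the choice n = 20 is arbitrary):

```python
# Estimate P(third number is between the first two | first < second).
import random

n, trials = 20, 200_000
between = cond = 0
for _ in range(trials):
    a, b, c = random.sample(range(1, n + 1), 3)  # drawn without replacement
    if a < b:                                    # condition: first < second
        cond += 1
        if a < c < b:                            # event: third falls between
            between += 1

print(between / cond)   # ~0.333, matching the book's answer of 1/3
```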
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9646236300468445, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/63974/is-every-homology-theory-given-by-a-spectrum/63975
## Is every homology theory given by a spectrum?

Let $E$ be a spectrum. For any CW complex $X$, define $h_i(X)=\pi_i(E\wedge X)$. Then we know that the $h_i$ form a homology theory. In other words, these functors satisfy homotopy invariance, map a cofiber sequence of spaces to a long exact sequence of abelian groups, and satisfy the wedge axiom in the definition of a homology theory.

I want to know the converse. Is every homology theory given by a spectrum in this way?

Thanks for all your comments. This is no longer a problem. Does anybody know how to close it?

- 3 A reference is probably best: see Switzer 14.35-36 for homology representation, which relies on various cohomology representation results from chapter 9 --- say, 9.21 and onward. – Eric Peterson May 5 2011 at 6:42
@yeshengkui: There is no need to close questions that have been satisfactorily answered, as these are unlikely to attract new answers and hence keep bubbling up to the top of the front page. – Mark Grant May 6 2011 at 13:43

## 2 Answers

For homology theories on CW-complexes or homology theories that map weak equivalences to isomorphisms, that's Brown's representability theorem, which you can find in any textbook on stable homotopy theory. You forgot the important axiom of excision, by the way. The short answer is yes.

- 6 To fill in a step: Brown's theorem applies to cohomology theories. To pass from cohomology to homology you use duality, observing that $S$-duality sets up a one-to-one correspondence between homology theories and cohomology theories (for finite complexes) in which $\pi_*(-\wedge E)$ corresponds to $\pi_*Map(-,E)$. – Tom Goodwillie May 5 2011 at 15:05
I see. I thought the homological one also went by the name of Brown rep. thm. – Tilman May 5 2011 at 18:49
Well, I think some people call it that, but I believe that what Brown proved was a theorem about which contravariant functors from the homotopy category of spaces to sets are representable, which is used to show that cohomology theories are representable. – Tom Goodwillie May 6 2011 at 0:40

The answer is yes, if you replace the wedge axiom with the stronger direct limit axiom $h_{i}(X) = \mathrm{lim}\ h_{i}(X_{\alpha})$, where $X$ is the direct limit of subcomplexes $X_{\alpha}$. As well as Switzer, this is discussed in Chapter 4.F of Hatcher's "Algebraic Topology", Adams' little blue book "Stable homotopy and generalised homology", and Adams' paper "A variant of E. H. Brown's representability theorem".

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354486465454102, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/5457/list
## Return to Question

Post Made Community Wiki by Harry Gindi

# $\omega$-topos theory?

I've been reading through Lurie's book on higher topos theory, where he develops the theory of $(\infty,1)$-toposes, which leads me to the following question: Is there any sort of higher topos theory on the more general $\omega$-categories, where we don't require all higher morphisms to be invertible?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9148585796356201, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=3656415&postcount=2
Hi Ad123q!

Quote by Ad123q
Hi, Was wondering if anyone could give me a hand. I need to prove that the Cayley Transform operator given by U=(A-i)(A+i)^-1 is UNITARY, ie that UU*=U*U=I where U* is the adjoint of U (I am given also that A=A* in the set of bounded operators over a Hilbert space H). My solution so far, is this correct? U=(A-i)(A+i)^-1 so (U)(x) = (A-i)((A+i)^-1)x (U acting on an x) Then (Ux,y)= {INTEGRAL}(A-i)((A+i)^-1)x y(conjugate) dx (1) = {INTEGRAL}x(A-i)((A+i)^-1)(both conjugate)y(all three conjugate) dx (2) =(x,U*y) and so deduce (U*)(y) = (A+i)((A-i)^-1)y and so the adjoint of U is U*=(A+i)(A-i)^-1 It can then be checked that UU*=U*U=I

How do you conclude this from your expression for U*? Btw, instead of using the integral, can't you simply use the properties of the adjoint operator? That is, $(AB)^*=B^*A^*$ and $(A^{-1})^*=(A^*)^{-1}$?

Quote by Ad123q
As you can see my main query is the mechanism of finding the adjoint of U for the given U. For clarity in step (1) it is just the y which is conjugated, and in step (2) it is (A-i)(A+i)^-1 which is conjugated and then also the whole of (A-i)((A+i)^-1)y which is also conjugated. Sorry if my notation is confusing, if unsure just ask. Thanks for your help in advance!
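For what it's worth, the adjoint-property route suggested above is easy to sanity-check numerically; a sketch of my own using numpy, with an arbitrary 4x4 Hermitian matrix standing in for the bounded self-adjoint operator:

```python
# Check U = (A - i)(A + i)^{-1} is unitary and U* = (A + i)(A - i)^{-1}.
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2            # Hermitian, so A = A*
I = np.eye(4)

U = (A - 1j * I) @ np.linalg.inv(A + 1j * I)
Ustar = (A + 1j * I) @ np.linalg.inv(A - 1j * I)   # claimed adjoint

print(np.allclose(U.conj().T, Ustar))                        # True
print(np.allclose(U @ Ustar, I), np.allclose(Ustar @ U, I))  # True True
```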
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9069947004318237, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/108622-boolean-ring.html
# Thread:

1. ## Boolean Ring

For the case a^2 = a (a belongs to R), I already showed that a+a = 0 for all a in R.

What can you say about a ring in which for all a in R, a^3 = a, and more generally, for all a in R, a^n = a where n >= 4?

2. Originally Posted by knguyen2005
For the case a^2 = a (a belongs to R), I already showed that a+a = 0 for all a in R. What can you say about a ring in which for all a in R, a^3 = a, and more generally, for all a in R, a^n = a where n >= 4?

well, if $a^n = a,$ for all $a \in R,$ then $(2^n -2)a=0, \ \forall a \in R,$ because for any $a \in R: 2^n a = (2a)^n = 2a.$ that's just trivial! a non-trivial (even hard probably) question is this: suppose $R \neq (0)$ is a ring with identity element and $n \geq 3$ is the smallest integer for which $a^n = a, \ \forall a \in R.$ what is the smallest integer $m$ for which $ma=0, \ \forall a \in R$? for example if $n=3,$ would it be possible to have $m=2$ or $3$?

3. Thanks for a quick reply.

If we use the formula, then for the case n = 2 we have 2a = 0, which implies that a + a = 0. So the formula works: when n = 3, 6a = 0, hence 3a + 3a = 0.

But when I do the calculation: Let a in R, a^3 = a and so (-a)^3 = (-a) by hypothesis. But (-a)^3 = (-a)(-a)(-a) = (a^2)(-a). Thus, (-a) = (a^2)(-a), hence a^2 = 1. What am I doing wrong here? Why did I not get the result 6a = 0 when n = 3?

Thanks again

4. Originally Posted by knguyen2005
Thus, (-a) = (a^2)(-a) hence a^2 = 1.
how did you get that? also we can't cancel $-a$ out because $-a$ is not necessarily a unit in R.

5. I knew my calculation was wrong, I just want to do another way to get to the answer. If I don't want to use the formula when n = 3, how am I supposed to get to the answer 6a = 0? I mean, can we do it another way instead of the formula you gave? Thank you so much for your valuable time

6. Originally Posted by knguyen2005
I knew my calculation was wrong, I just want to do another way to get to the answer. If I don't want to use the formula when n = 3, how am I supposed to get to the answer 6a = 0?

$a+a=(a+a)^3 = (2a)^3=8a^3=8a.$ so $8a=2a$ and thus $6a=0.$
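A brute-force illustration of my own, not from the thread: in the rings Z_n everything can be checked exhaustively, and the only small n for which a^3 = a holds identically are n = 1, 2, 3, 6, with 6a = 0 holding in each.

```python
# Find all small n for which a^3 = a in Z_n, and confirm 6a = 0 there.
def cube_law_holds(n):
    return all(pow(a, 3, n) == a % n for a in range(n))

for n in range(1, 50):
    if cube_law_holds(n):
        assert all((6 * a) % n == 0 for a in range(n))
        print(n)   # prints 1, 2, 3, 6
```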
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504246711730957, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/178019-consistency-matrix.html
# Thread:

1. ## Consistency of a matrix

Hi guys,

My problem states: Let $A$ be an augmented matrix of size m x m+1 (that is to say, the coefficient matrix of the system is square and of size m x m), and $A$ has all non-zero entries. Prove or disprove that this system is consistent and, in fact, has one and only one solution.

Any quick hints as to how to start this problem? Thanks!
James

2. Originally Posted by james121515
Hi guys, My problem states: Let $A$ be an augmented matrix of size m x m+1 (that is to say, the coefficient matrix of the system is square and of size m x m), and $A$ has all non-zero entries. Prove or disprove that this system is consistent and, in fact, has one and only one solution. Any quick hints as to how to start this problem? Thanks! James

If you are unsure whether or not a result is true, the best way to start is by experimenting with simple examples. In this case, take m=2, so that you are looking at 2x3 matrices. If such a matrix has all nonzero entries, does it necessarily represent a consistent system with only one solution?
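Taking that hint literally, here is a 2x3 counterexample worked out in code (my own sketch; the rank comparison is the standard consistency test): all entries are nonzero, yet x + y = 1 and x + y = 2 cannot both hold, so the claim is disproved.

```python
# All-nonzero 2x3 augmented matrix representing an inconsistent system.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])     # square coefficient matrix, all entries nonzero
b = np.array([1.0, 2.0])       # augmented column, nonzero

aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))
# 1 2  ->  rank(A) < rank([A|b]), so the system has no solution
```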
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451779723167419, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/27896/differential-equations
# Differential Equations

How would I solve these differential equations? Thanks so much for the help!

$$P'_0(t) = \alpha P_1(t) - \beta P_0(t)$$
$$P'_1(t) = \beta P_0(t) - \alpha P_1(t)$$

We also know $P_0(t)+P_1(t)=1$

-

## 3 Answers

Note that from the equation you have

$$P'_0(t) = \alpha P_1(t) - \beta P_0(t) = -P'_1(t)$$

which gives us $P'_0(t) + P'_1(t) = 0$, which gives us $P_0(t) + P_1(t) = c$. We are given that $c=1$. Use this now to eliminate one in terms of the other. For instance, $P_1(t) = 1-P_0(t)$ and hence we get,

$$P'_0(t) = \alpha (1-P_0(t)) - \beta P_0(t) \Rightarrow P'_0(t) = \alpha - (\alpha + \beta)P_0(t)$$

Let $Y_0(t) = e^{(\alpha + \beta)t}P_0(t) \Rightarrow Y'_0(t) = e^{(\alpha + \beta)t} \left[P'_0(t) + (\alpha + \beta) P_0(t) \right] = \alpha e^{(\alpha + \beta)t}$

Hence, $Y_0(t) = \frac{\alpha}{\alpha + \beta}e^{(\alpha + \beta)t} + k$ i.e.

$$P_0(t) = \frac{\alpha}{\alpha + \beta} + k e^{-(\alpha+\beta)t}$$

$$P_1(t) = 1 - P_0(t) = \frac{\beta}{\alpha + \beta} - k e^{-(\alpha+\beta)t}$$

- Thank you so much! Much appreciated. – icobes Mar 19 '11 at 2:09
One question: I don't follow how you get a $ke^{-(\alpha+\beta)t}$ term for $P_0(t)$. Wasn't $Y_0(t)$ a product? Thanks. – icobes Mar 19 '11 at 2:15

Use $P_0(t)+P_1(t)=1$ to turn it into $P_0'(t)=\alpha-(\alpha+\beta)P_0(t)$, which you should be able to solve.

- Could you give me a hint on how to solve $P_0'(t)=\alpha-(\alpha+\beta)P_0(t)$? I've never learned how to solve differential equations before and this was a question for a probability class, so that is why I am completely lost. Thanks so much! – icobes Mar 19 '11 at 2:05
– Ross Millikan Mar 19 '11 at 2:09

There is a general method to solve such equations, if we view them as a linear system of equations $$y'(x) = A y(x)$$ When $A$ is a matrix with constant entries, the solution can be written in terms of the matrix exponential $e^{Ax}$. More details can be found here.

-
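If symbolic software is to hand, the reduced equation from the answers can be solved mechanically; a sketch of my own assuming sympy is available:

```python
# Solve P0' = alpha - (alpha + beta) * P0 and read off P1 = 1 - P0.
import sympy as sp

t = sp.symbols('t')
alpha, beta = sp.symbols('alpha beta', positive=True)
P0 = sp.Function('P0')

sol = sp.dsolve(sp.Eq(P0(t).diff(t), alpha - (alpha + beta) * P0(t)), P0(t))
print(sol)   # P0(t) = alpha/(alpha + beta) + C1*exp(-(alpha + beta)*t)
```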
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9420011043548584, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/10217/list
## Return to Answer

Let $K$ be a number field and $F$ a polynomial with coefficients in $K$. Class field theory shows that the following properties of $F$ are equivalent:

(1) There is a subgroup $H$ of a generalized ideal class group $C_{\mathfrak m}$ (of some appropriately chosen conductor $\mathfrak m$) and a finite set of prime ideals $S$ (including all primes involved in the denominators of $F$) such that $F$ splits completely modulo $\wp$ (for $\wp \not\in S$) if and only if the class of $\wp$ in $C_{\mathfrak m}$ lies in $H$.

(2) The splitting field of $F$ over $K$ is an abelian extension.

In Shimura's example, the field $K$ is $\mathbf Q$, the conductor is $7$, the set $S$ is $\{7\}$, and the splitting field is the degree 3 extension of $\mathbf Q$ contained in ${\mathbf Q}(\zeta_7)$.

So in general, to construct $F$ of the type considered by Shimura, one must construct abelian extensions of number fields $K$. The theory of complex multiplication is one tool that allows one to do this.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305635690689087, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/275522/dual-notion-of-free-module
# Dual notion of free module

I was just introduced to the notion of projective module (through Matsumura, Commutative ring theory), and there it is said that the reason there is no easy description for injective modules (along the lines of "projective modules are direct summands of free modules") is that there is no notion dual to free module. Could somebody enlighten me about the difficulty of such a construction, i.e., why would it be meaningless to reverse the arrows in the universal property definition of free module?

- 2 – Qiaochu Yuan Jan 11 at 0:32
1 – Martin Jan 11 at 1:25

## 1 Answer

Caveat lector: this answers your last question "why would it be meaningless to reverse the arrows in the universal property definition of free module?", regardless of the fact that this is not exactly what you want for your purposes: see Qiaochu's comment below.

The universal property of a free object (e.g. free module) expresses the fact that the forgetful functor has a left adjoint, the free functor. A cofree functor, the one that gives you cofree objects, i.e. those which satisfy the dual universal property to free objects, would be a right adjoint to the forgetful functor. Alas, the forgetful functor does not have a right adjoint in the category of modules over a ring, because it would have to preserve colimits, and it doesn't (e.g. it doesn't preserve coproducts). Hence there is no "cofree functor".

- This isn't even the real problem. What you want is not a right adjoint to the forgetful functor $C \to \text{Set}$ but a left adjoint to a different forgetful functor $C^{op} \to \text{Set}$ (to get projective objects in $C^{op}$, which are injective objects in $C$). – Qiaochu Yuan Jan 11 at 0:34
@Qiaochu: Thank you for the heads-up, I've added a notice. – Bruno Stonek Jan 11 at 0:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9112304449081421, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/160019-limits-function-complex-numbers.html
Thread:

1. Limits of a Function with complex numbers

I am studying for an exam and came across this exercise...

Let $f: D\rightarrow C$ (complex numbers). Suppose a is a limit point of D. If $f(x)\rightarrow b$ as $x\rightarrow a$, then $Re(f(x))\rightarrow Re(b)$ as $x\rightarrow a$.

So we have that if $0<|x-a|<\delta \Longrightarrow |f(x)-b|<\epsilon$, then $0<|x-a|<\delta \Longrightarrow |Re(f(x))-Re(b)|<\epsilon$. However, I'm not sure how to use this to find the appropriate values for epsilon and delta that make this true. Any help would be appreciated.

2. You have posted some serious problems. http://www.mathhelpforum.com/math-he...ial-19060.html So why not learn to post in symbols? You can use LaTeX tags. You are more likely to receive serious attention if your postings are easy to read.
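The thread never states the key estimate, so for completeness: since $|Re(z)| \leq |z|$ for every complex $z$, taking $z = f(x) - b$ gives $|Re(f(x)) - Re(b)| \leq |f(x) - b| < \epsilon$, so the very same $\delta$ that works for $f$ works for $Re(f)$. A quick numerical spot-check of that inequality (my own sketch):

```python
# Spot-check |Re(w - b)| <= |w - b| on random complex pairs.
import random

for _ in range(10_000):
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    b = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs((w - b).real) <= abs(w - b) + 1e-12   # tolerance for rounding
print("|Re(f(x)) - Re(b)| <= |f(x) - b| held in every trial")
```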
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395637512207031, "perplexity_flag": "head"}
http://nrich.maths.org/2127/index?nomenu=1
## 'A Mixed-up Clock' printed from http://nrich.maths.org/

There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from the ten statements below? Here is a clock-face with letters to mark the position of the numbers so that the statements are easier to read and to follow.

1. No even number is between two odd numbers.
2. No consecutive numbers are next to each other.
3. The numbers on the vertical axis (a) and (g) add to $13$.
4. The numbers on the horizontal axis (d) and (j) also add to $13$.
5. The first set of $6$ numbers [(a) - (f)] add to the same total as the second set of $6$ numbers [(g) - (l)].
6. The number at position (f) is in the correct position on the clock-face.
7. The number at position (d) is double the number at position (h).
8. There is a difference of $6$ between the number at position (g) and the number preceding it (f).
9. The number at position (l) is twice the top number (a), one third of the number at position (d) and half of the number at position (e).
10. The number at position (d) is $4$ times one of the numbers adjacent (next) to it.
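The ten statements can also be handed to a short brute-force search. In this sketch (my own code, not from NRICH) I assume the letters (a)-(l) run clockwise from the top of the face, so statement 6 pins position (f) to the value 5 that an ordinary clock has there; statement 9 is propagated first to shrink the search.

```python
# Solve the mixed-up clock by constraint propagation plus brute force.
from itertools import permutations

def valid_ring(p):
    # (1) no even number between two odds, (2) no consecutive numbers adjacent
    for x in range(12):
        left, mid, right = p[x - 1], p[x], p[(x + 1) % 12]
        if abs(mid - left) == 1:
            return False
        if mid % 2 == 0 and left % 2 == 1 and right % 2 == 1:
            return False
    return True

for a in range(1, 13):
    l, d, e = 2 * a, 6 * a, 4 * a            # statement (9): l=2a, d=3l, e=2l
    g, h, j = 13 - a, 3 * a, 13 - 6 * a      # statements (3), (7), (4)
    fixed = [a, d, e, g, h, j, l]
    if not all(1 <= v <= 12 for v in fixed) or len(set(fixed)) != 7:
        continue
    rest = [v for v in range(1, 13) if v not in fixed]
    for b, c, f, i, k in permutations(rest):
        p = (a, b, c, d, e, f, g, h, i, j, k, l)
        if f != 5 or abs(g - f) != 6:        # (6) under my layout reading, (8)
            continue
        if d != 4 * c and d != 4 * e:        # (10): d's neighbours are c and e
            continue
        if sum(p[:6]) != sum(p[6:]):         # (5)
            continue
        if valid_ring(p):
            print(dict(zip("abcdefghijkl", p)))   # exactly one arrangement
```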
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8823390603065491, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/20/evaluating-the-average-time-complexity-of-a-given-bubblesort-algorithm
# Evaluating the average time complexity of a given bubblesort algorithm

Considering this pseudo-code of a bubblesort:

```
FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switched = false THEN break ENDIF
NEXT
```

What would be the basic ideas I would have to keep in mind to evaluate the average time complexity? I have already calculated the worst and best cases, but I am stuck on how to evaluate the average complexity of the inner loop in order to form the equation.

The worst case equation is:

$$\sum_{i=0}^n \left(\sum_{j=0}^{n -(i+1)}O(1) + O(1)\right) = O\left(\frac{n^2}{2} + \frac{n}{2}\right) = O(n^2)$$

in which the inner sigma represents the inner loop and the outer sigma represents the outer loop. I think that I need to change both sigmas: the "if-then-break" clause might affect the outer sigma, and the if-clause in the inner loop will affect the actions done during a loop (4 actions + 1 comparison if true, else just 1 comparison).

For clarification on the term average time: this sorting algorithm will need different time on different lists (of the same length), as the algorithm might need more or fewer steps through/within the loops until the list is completely in order. I am trying to find a mathematical (non-statistical) way of evaluating the average of those rounds needed. For this I expect any order to be equally likely.

- 6 you first need to define what average even means. Since the algorithm is deterministic, you'd have to assume some kind of distribution over inputs. – Suresh Mar 6 '12 at 20:59
@Sim Can you show how you computed the worst-case time complexity? Then, we might get an idea on what you mean by average complexity in your case. – Sunil Mar 6 '12 at 21:04
I mean average time in the sense of the most likely time needed (or in other words the 'pure' mathematical version of: the mean of all times observed doing a statistical analysis). For example quicksort has an average of nlogn even though its worst case is n^2. – Sim Mar 6 '12 at 21:07
1 @Sim In the case of bubble sort average case = worst case time complexity, meaning the average-case time complexity is also $n^2$ – Sunil Mar 6 '12 at 21:20
3 There's a difference. quicksort is averaged "over the choice of coin tosses when choosing a pivot" which has nothing to do with the data. Whereas you are implying that you want to average "over all inputs" which assumes (for example) that you expect each ordering of the input to occur with the same probability. that's reasonable, but it should be stated explicitly. – Suresh Mar 6 '12 at 21:21

## 3 Answers

For lists of length $n$, average usually means that you have to start with a uniform distribution on all $n!$ permutations of [$1$, .., $n$]: those will be all the lists you have to consider. Your average complexity would then be the sum of the number of steps for all lists divided by $n!$. For a given list $(x_i)_i$, the number of steps of your algorithm is $nd$ where $d$ is the greatest distance between an element $x_i$ and its rightful location $i$ (but only if it has to move to the left), that is $\max_i(\max(1,i-x_i))$. Then you do the math: for each $d$ find the number $c_d$ of lists with this particular maximal distance; then the expected value of $d$ is:

$$\frac1{n!}\ \sum_{d=0}^n{\ dc_d}$$

And those are the basic thoughts, without the hardest part, which is finding $c_d$.
Maybe there is a simpler solution though. (EDIT: added "expected".)

- If you consider a normal distribution, is there a way to approximate $c_d$? – Sim Mar 6 '12 at 21:48
You can say $c_d \geq (n+1-d)(d-1)!$ because you can mingle anywhere all the permutations of [$2$, .., $d$] and append $1$ at the end, but that's too small to prove $n^2$ on average. – jmad Mar 6 '12 at 22:15

Recall that a pair $(A[i], A[j])$ (resp. $(i,j)$) is inverted if $i < j$ and $A[i] > A[j]$. Assuming your algorithm performs one swap for each inversion, the running time of your algorithm will depend on the number of inversions. Calculating the expected number of inversions in a uniform random permutation is easy: Let $P$ be a permutation, and let $R(P)$ be the reverse of $P$. For example, if $P = 2,1,3,4$ then $R(P) = 4,3,1,2$. For each pair of indices $(i,j)$ there is an inversion in exactly one of either $P$ or $R(P)$. Since the total number of pairs is $n(n-1)/2$, and each pair is inverted in exactly half of the permutations, assuming all permutations are equally likely, the expected number of inversions is:

$$\frac{n(n-1)}{4}$$

- this evaluates the number of inversions. but how about the number of comparisons, which depends on when the break-clause kicks in – Sim Mar 6 '12 at 22:20
You get one comparison per swap and most importantly one swap can reduce the number of inversions by at most one. – jmad Mar 6 '12 at 22:24
not every comparison results in a swap; if the if-clause is false, no swap is done. – Sim Mar 6 '12 at 22:31
@rgrig If you provide a counter-example, then I will correct my answer. – Joe Mar 7 '12 at 0:18
@Joe: I removed my comment. It was wrong. – rgrig Mar 7 '12 at 7:06

Number of swaps $\le$ number of iterations (in both the optimized and the simple bubble sort scenario). Number of inversions = number of swaps. Therefore, the expected number of iterations is $\geq \frac{n(n-1)}{4}$, so the average-case complexity is $\Omega(n^2)$. But since the average case cannot exceed the worst case, we get that it is also $O(n^2)$. This gives us: average time $= \Theta(n^2)$. (Time complexity = number of iterations $\geq$ number of swaps.)

-
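To make the inversion-count answer concrete, here is a quick simulation of my own (not from the thread) comparing the average number of swaps the pseudo-code performs on uniform random permutations with the predicted $n(n-1)/4$:

```python
# Average swap count of the bubble sort above vs. expected inversions.
import random

def bubble_swaps(a):
    a, swaps = a[:], 0
    for i in range(len(a)):
        switched = False
        for j in range(len(a) - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
                switched = True
        if not switched:      # the early-exit "break" clause
            break
    return swaps

n, trials = 20, 2000
avg = sum(bubble_swaps(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg, n * (n - 1) / 4)   # empirical mean hovers around 95 for n = 20
```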
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150987267494202, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/105612?sort=votes
## The distance between the centroid of $P$ points and the centroid of a subset of the points

Imagine I have a set $P$ of $N$ points $p_1, \ldots, p_N$ on a two-dimensional plane, patterned in a rectangular or hexagonal lattice arrangement in a circle of radius $R_c$, with a spacing between the points of $r_s$. Let $C_P$ be the centroid of the points of $P$. If I randomly select some subset of $k$ points from $P$, and I compute the centroid of these $k$ points, $C_k$, what is the probability that the distance between $C_P$ and $C_k$ is $\leq D$?

Update: I have specified that the $P$ points in the circle are in a rectangular or hexagonal lattice arrangement (whichever is easiest to analyze).

- The optimal circle packings in a disk are complicated and not known in general, and I doubt you really care about that level of complexity. The exact probability for small $D$ will depend on the exact optimal configuration. If you instead let the $P$ points be IID uniformly random, then you can say something. You can also compare the centroid of $k$ points with the center of the disk. One tool for this is the Central Limit Theorem, although you may want to use an effective version such as the Berry-Esseen Theorem or a large deviations result to get a bound for a particular $k.$ – Douglas Zare Aug 27 at 13:28
@Douglas Zare I have updated the question to specify that the $P$ points should be patterned in a rectangular or hexagonal lattice arrangement (whichever is easiest to analyze). Also, I do care about circle packing, it's really interesting, but it was inappropriate for me to confound this question with a very difficult geometry problem. – CKura Aug 27 at 14:44

## 1 Answer

I might not be understanding the question, but the centroid is just the mean of the sample, so for mildly large samples from a mildly large set it will be (bivariate) normally distributed, and all the statistics are easy to compute.

-
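Short of an exact formula, a Monte Carlo run is the quickest check of the bivariate-normal picture in the answer; a sketch of my own with made-up parameter values:

```python
# Estimate P(|C_k - C_P| <= D) for a square lattice of points in a disk.
import math
import random

R_c, r_s, k, D = 10.0, 1.0, 12, 0.8
pts = [(x * r_s, y * r_s)
       for x in range(-15, 16) for y in range(-15, 16)
       if math.hypot(x * r_s, y * r_s) <= R_c]
cx = sum(p[0] for p in pts) / len(pts)   # centroid C_P (the origin here,
cy = sum(p[1] for p in pts) / len(pts)   # by symmetry of the lattice)

hits, trials = 0, 100_000
for _ in range(trials):
    sub = random.sample(pts, k)
    sx = sum(p[0] for p in sub) / k      # centroid C_k of the random subset
    sy = sum(p[1] for p in sub) / k
    hits += math.hypot(sx - cx, sy - cy) <= D
print(hits / trials)
```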
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922673225402832, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/207294/ordinary-differential-equation-strategy?answertab=oldest
# Ordinary Differential Equation Strategy

In one of the examples in the Differential Equations for Dummies Workbook (Holzner), you are asked to use an integrating factor to solve

$$\frac{dy}{dx} +2y =4$$

My question is, is this the most efficient way to solve it? Can't I also solve it by separating the $y$, turning the equation into $\frac{1}{2-y}\frac{dy}{dx} = 2$? Are there other ways? How do you quickly determine what will be the quickest way?

-

## 3 Answers

The quickest way to solve this is to note that $\lambda+2=0$ gives $\lambda=-2$, hence $y_h = e^{-2x}$, and eyeballing it shows $y_p = 2$, hence $y = c_1e^{-2x}+2$.

- Can you please explain that further? What does $\lambda$ represent? – Imray Oct 4 '12 at 17:36
Sure, when we solve a constant coefficient ode we can solve it by looking at the characteristic equation. For example, $y''-y=0$ gives $\lambda^2-1=0$ hence $\lambda = \pm 1$ which tells me $y = c_1e^t+c_2e^{-t}$. Perhaps you will study this soon in your course. The other part $y_p$ is called the particular solution. If you haven't studied these yet then separation of variables is probably better, or the integrating factor method for more complicated linear odes. – James S. Cook Oct 4 '12 at 17:41
– James S. Cook Oct 4 '12 at 17:44
1 – James S. Cook Oct 4 '12 at 17:47
Awesome thanks! I'm reading up on it now... – Imray Oct 4 '12 at 17:59

Another way to solve it would be to use the characteristic equation, which is what the above answer used: z^2 + 2z - 4 = 0. You can factor this equation, and the general solution would follow by using the general formula provided in any ODE text. However, integrating factors do not require you to remember a formula, and in most situations this technique is more elegant and quick.

- That characteristic equation is wrong... – Hans Lundmark Oct 21 '12 at 11:16

You can separate variables, thus:
$$\frac{dy}{dx} = 4-2y$$
$$\frac{dy}{2-y} = 2\,dx$$
$$-\log|2-y| = 2x+\text{constant}$$
$$|2-y|=e^{-2x-\text{constant}} = e^{-2x}\cdot(\text{positive constant})$$
$$2-y = e^{-2x}\cdot\text{constant}$$
$$y = 2-(e^{-2x}\cdot\text{constant})$$

As for integrating factors, notice that you have $y'+2y$ and after multiplying by some factor---call it $w$---you have $wy'+(2w)y$ and you want $wy'+w'y$, so that it becomes $(wy)'$. So you need $w'=2w$. That's a differential equation, one of whose solutions is $w=e^{2x}$. And you only need one of its solutions.

-
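Whichever method is used, the candidate solution can be verified by substitution; a short check of my own with sympy (assumed available):

```python
# Substitute y = 2 + C*exp(-2x) back into dy/dx + 2y and simplify.
import sympy as sp

x, C = sp.symbols('x C1')
y = 2 + C * sp.exp(-2 * x)                  # solution all three methods reach
print(sp.simplify(sp.diff(y, x) + 2 * y))   # 4, so the ODE is satisfied
```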
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329698085784912, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/36071/how-to-make-the-projected-image-smaller-by-adding-one-ore-more-lenses-in-front-o
# How to make the projected image smaller by adding one or more lenses in front of the built-in projector lens?

I have a projector that creates a large image, even if the distance to the screen is short. The device is very small, approximately 10x10 cm if you look from above. The height is only 3 cm. I could remove and replace the built-in "lens pack" to make the size of the projected image smaller, but I'd prefer to keep the device unchanged. What type(s) of lens(es) in front of the unchanged device would be required to make the image dramatically smaller?

I have measured the following with the original device. If the distance $d$ from the focal point to the screen is $300\,cm$, the width $w_1$ of the image on the screen is $\approx200\,cm$. And if $d$ is as far as $900\,cm$ then $w_1= 600\,cm$.

What can I do to make the image smaller, so that its new width $w_2$ would be approx. $100\,cm$ at a distance $d= 900\,cm$? Lenses, mirrors or prisms would be OK, but I somehow guess 1 or 2 lenses would do the trick. What type of lenses would these be, and how do I choose and calculate the type and the parameters of the lens? Can you teach me how to choose and understand the formula? The quality of the image and the resolution should still be good afterwards. Of course I can have a $1\,m$ image if I move the projector close enough, but that is not the solution I am looking for.

I also tried to find this out by myself. I visited two camera stores and talked with the owners. I also visited optician retail stores. But nobody there could answer this question. I have some basic knowledge of physics and also tried to solve this by reading books about optics and geometrical optics, but without any success. I also tried with some lenses I could get my hands on but I could not solve it. The problem I also had to deal with was that as soon as I moved the lenses a little further away from the device, they were too small to catch the whole beam.

-

## 2 Answers

A single lens with the object located at a distance greater than $2f$ produces a smaller image (upside down). http://www.phys.hawaii.edu/~teb/java/ntnujava/Lens/lens_e.html

A doublet can also reduce the size: http://ressources.univ-lemans.fr/AccesLibre/UM/Pedago/physique/02/optigeo/doublet2.html

- Upside down is fine. The projector can switch the image accordingly. – SHernandez Sep 10 '12 at 13:47
Thank you for the links. I will take some time and try and understand what they can teach me. – SHernandez Sep 10 '12 at 13:49
Just move the object on the left with the mouse and you see the image on the right. You just have to place your screen where the image is. – Shaktyai Sep 10 '12 at 13:50

You might find this prohibitively expensive. Two reasons:

• a simple lens will not be good enough, because it will introduce chromatic aberration: divergence of rays of different colors. This is because the glass's index of refraction is slightly different for different wavelengths (this is called "dispersion"). You will need to use an achromatic doublet or triplet, which is two or three lenses made of different kinds of glass specially tailored to cancel out each other's dispersion.
• as you found out, the lens has to be big enough to catch the entire beam, and so will require a lot of glass.
You can mitigate this problem by placing the lens closer to the projector, but then you will need a shorter focal length; in addition to making the chromatic aberration worse, it will also introduce spherical aberration, which is distortion of the image in simple lenses of small focal lengths. To prevent this, you will need an aspheric lens, which is ground in a complicated shape and is also expensive.

- One suggestion which might make this doable for a low cost: look into using a camera lens from an old (pre-digital) SLR camera. These can be found very cheaply on ebay and craigslist. If you have a friend with a nice camera see if you can borrow a zoom lens, you may be able to find a configuration which does what you want. – user2963 Sep 10 '12 at 16:12
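The first answer's "greater than $2f$" rule is just the thin-lens equation at work. A small sketch of my own (the numbers are hypothetical, and real projector optics add the sign conventions and aberrations discussed above):

```python
# Thin lens: 1/d_o + 1/d_i = 1/f; magnification m = d_i/d_o (sizes only).
def thin_lens(d_o, f):
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # image distance, same units as inputs
    return d_i, d_i / d_o               # image distance and magnification

# An object 300 cm away imaged by an f = 75 cm lens (object beyond 2f):
print(thin_lens(300.0, 75.0))           # (100.0, 0.333...): image 3x smaller
```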
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9587766528129578, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/69576/sum-of-squares-modulo-a-prime/69577
## Sum of squares modulo a prime ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What is the probability that the sum of squares of n randomly chosen numbers from $Z_p$ is a quadratic residue mod p? That is, let $a_1$,..$a_n$ be chosen at random. Then how often is $\Sigma_i a^2_i$ a quadratic residue? - 2 Speaking as a non-expert, I should think that for n much larger than, say, log(p) (or maybe even 4), the probability should approach 1/2. Are you interested in the answer for n << p, or p << n, or is there some other relationship between n and p which would make answering the question easier? Gerhard "Email Me About System Design" Paseman, 2011.07.05 – Gerhard Paseman Jul 5 2011 at 22:34 Also, I assume p is prime from your title, but it would help to call that out in the body of the question. Gerhard "Email Me About System Design" Paseman, 2011.07.05 – Gerhard Paseman Jul 5 2011 at 22:37 Yes, $p$ prime — and odd — is implicit both in the title and in the use of "quadratic residue". – Noam D. Elkies Jul 5 2011 at 22:53 This is my intuition as well. n should be at most polynomial in log(p) and p is meant to be a prime. Is there a standard theorem for this? – unknown (google) Jul 5 2011 at 22:54 As you can see, if $p \rightarrow \infty$ then already $n=3$ is enough to get near-equidistribution (and $n=2$ fails only because 0 is over- or under-represented depending on whether $p$ is $+1$ or $−1 \bmod 4$). – Noam D. Elkies Jul 6 2011 at 2:36 ## 4 Answers This probability can be calculated exactly, and indeed it approaches $1/2$ rather quickly — more precisely, for each $p$ it approaches the fraction $(p-1)/(2p)$ of quadratic residues $\bmod p$. This can be proved by elementary means, but perhaps the nicest way to think about it is that if you choose $n$ numbers $a_i$ independently and sum $a_i^2 \bmod p$, the resulting distribution is the $n$-th convolution power of the distribution of a random single square — so its discrete Fourier transform is the $n$-th power of the D.F.T., call it $\gamma$, of the distribution of $a^2 \bmod p$. For this purpose $\gamma$ is normalized so $\gamma(0)=1$. Then for $k \neq 0$ we have $\gamma(k) = (k/p) \gamma(1)$ [where $(\cdot/p)$ is the Legendre symbol], and ```$$ p \gamma(1) = \sum_{a \bmod p} \exp(2\pi i a^2/p), $$``` which is a Gauss sum and is thus a square root of $\pm p$. It follows that $|\gamma(k)| = p^{-1/2}$, from which we soon see that each value of the convolution approaches $1/p$ at the exponential rate $p^{-n/2}$, and the probability you asked for approaches $(p-1)/(2p)$ at the same rate. As noted above, this result, and indeed the exact probability, can be obtained by elementary means, yielding a (known but not well-known) alternative proof of Quadratic Reciprocity(!). But that's probably too far afield for the present purpose. - From your answer I infer 0 does not count as a quadratic residue mod p. (I definitely agree it is not as interesting as a quadratic residue, but should it be really ostracized from the set of squares?) Gerhard "Supports The Rights of Zero" Paseman, 2011.07.05 – Gerhard Paseman Jul 5 2011 at 23:12 4 While 0 is indeed a square, it does not count as a "quadratic residue" (nor as a "quadratic nonresidue" — though this term should really have been "nonquadratic residue"). Properly 0 should count as half square and half non-square (think about the number of square roots), and then the limit would really be 1/2. – Noam D. 
Elkies Jul 5 2011 at 23:55 5 For quadratic reciprocity: let $n$ be an odd prime prime $l \neq p$, and $N$ the number of solutions of $\sum_{i=1}^l a_i^2 = 1 \bmod p$. Cyclic permutation of the coordinates has two fixed points or none according as $N \equiv 2$ or $0 \bmod l$. From the formula for $N$ it soon follows that $(l/p) \equiv {p^*}^{(l-1)/2} \bmod l$ where $p^* = \gamma(1)^2 = p$ or $-p$ according as $p \equiv 1$ or $-1 \bmod 4$. By Legendre's formula this means $(l/p) = (p^*/l)$. Exercise: modify this to determine $(2/p)$ from the count of solutions of $x_1^2 + x_2^2 \equiv 1 \bmod p$. [Original source?] – Noam D. Elkies Jul 6 2011 at 0:01 1 :-) If no reference turns up here I'll post a reference request as a question... It worked quite well for my one previous M.O. query (on the integral for $\frac{22}{7} - \pi$). – Noam D. Elkies Jul 6 2011 at 3:17 5 This idea can also be used to get at a few memorable cases of higher reciprocity. For example: if $p \equiv 1 \bmod 3$, and you already know that the number of solutions of $x^3 + y^3 = 1 \bmod p$ is $p - 2 - a$ where $4p = a^2 + 27 b^2$ for some integers $a,b$, then it follows at once that $2$ is a cubic residue iff $a$ is even, that is, iff $p$ itself can be written as $a^2 + 27b^2$. [NB the count is $p-2-a$, not the familiar $p+1-a$, because for this purpose we must exclude the three points at infinity.] – Noam D. Elkies Jul 6 2011 at 14:41 show 6 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The probability depends on the parity of $n$ and the residue of $p$ modulo $4$: it can be calculated in a straightforward way using Gauss sums. Let $n$ be $2k$ or $2k+1$, and let $p\equiv r\pmod{4}$ where $r=\pm 1$. Then, assuming I made no mistake, the probability equals $$\frac{p+1}{2p}+\frac{p-1}{2p}(rp)^{-k}.$$ Note that in my calculation I regarded zero as a quadratic residue. If we exclude zero then the final answer will look slightly different, with a main term $\frac{p-1}{2p}$ as Noam Elkies said. - Hah! A fellow Zero supporter. Welcome, Zero-brother! Gerhard "Sound The Drums And Cannon" Paseman, 2011.07.05 – Gerhard Paseman Jul 5 2011 at 23:59 .5 Let me atone for giving too few details by giving too many. Let $$S=\sum_{a_1=0}^{p-1}\dots\sum_{a_n=0}^{p-1}\sum_{t=0}^{p-1}\sum_{m=0}^{p-1}e^{2\pi im(a_1^2+\cdots+a_n^2-t^2)/p}$$ The innermost sum is $p$ if $a_1^2+\cdots+a_n^2-t^2\equiv0\pmod p$ and zero otherwise, so $S$ counts $2p$ whenever $a_1^2+\cdots+a_n^2$ is a (nonzero) quadratic residue, $p$ whenever it's zero. On the other hand, $$S=\sum_{m=0}^{p-1}\sum_{a_1=0}^{p-1}\dots\sum_{a_n=0}^{p-1}\sum_{t=0}^{p-1}e^{2\pi im(a_1^2+\cdots+a_n^2-t^2)/p}$$ so $$S=p^{n+1}+\sum_{m=1}^{p-1}\sum_{a_1=0}^{p-1}\dots\sum_{a_n=0}^{p-1}\sum_{t=0}^{p-1}e^{2\pi im(a_1^2+\cdots+a_n^2-t^2)/p}$$ so $$S=p^{n+1}+\sum_{m=1}^{p-1}\left(\left(\sum_{a_1=0}^{p-1}e^{2\pi ima_1^2/p}\right)\cdots\left(\sum_{a_n=0}^{p-1}e^{2\pi ima_n^2/p}\right)\left(\sum_{t=0}^{p-1}e^{2\pi imt^2/p}\right)\right)$$ Each of those inner sums is a Gauss sum and known to equal $\sqrt p$ in modulus (more detail: the sum is ${m\overwithdelims()p}\sqrt{{-1\overwithdelims()p}p}$), so $|S-p^{n+1}|\le(p-1)p^{(n+1)/2}$. For $n\gt1$, the main term beats the error term, and you get a good estimate. - 1 If you're going for the shortest answer, I recommend changing character sets. 
- If you're going for the shortest answer, I recommend changing character sets. Gerhard "Brevity Is The Soul" Paseman, 2011.07.05 – Gerhard Paseman Jul 5 2011 at 22:48
- Also $\frac12$ is not quite right if $p$ is fixed and only $n$ grows. – Noam D. Elkies Jul 5 2011 at 23:00
- Or "Levity is the role of wit"... – Noam D. Elkies Jul 5 2011 at 23:57
- The answer is worth the question. +1 – Wadim Zudilin Jul 6 2011 at 2:55

Here is a slightly different argument: Let $Q$ be a non-degenerate quadratic form over $\mathbb{F}_q$ of rank $n$ and determinant $d$. Let $A(n,d)=|\{x\in \mathbb{F}_q^n : Q(x)=0\}|$. The claim is that $A(n,d)=q^{n-1}+O(q^{n/2})$. For $n>2$ we can write $Q(X)=Q_0(X_1,\ldots,X_{n-2})+X_{n-1}X_n$, where $Q_0$ is a form of rank $n-2$ in the variables $X_1,\ldots,X_{n-2}$. This decomposition shows instantly that $A(n,d)=(2q-1)A(n-2,-d)+(q-1)(q^{n-2}-A(n-2,-d))$. Proceeding by induction we get the estimate $A(n,d)=q^{n-1}+O(q^{n/2})$. (The error term can be computed exactly using Gauss sums.) Applying this to the forms $X_1^2+\cdots+X_n^2-X_{n+1}^2$ and $X_1^2+\cdots+X_n^2$ we get that the desired probability is $(A(n+1,-1)-A(n,1))/(2q^n)=(q-1)/(2q)+O(q^{-n/2})$.
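The closed form given above is easy to sanity-check numerically: as in the first answer, the distribution of $a_1^2+\cdots+a_n^2 \bmod p$ is the $n$-fold convolution of the distribution of a single square. The following minimal Python sketch (the function names are mine) verifies the formula exactly for small $p$ and $n$, with zero counted as a quadratic residue as in that answer.

```python
# Verify P = (p+1)/(2p) + (p-1)/(2p) * (rp)^(-k) for the probability that
# a_1^2 + ... + a_n^2 mod p lands on a square (0 included), where p = r mod 4
# with r = +-1, and n = 2k or 2k+1.  Exact rational arithmetic throughout.
from fractions import Fraction

def prob_square_sum(p, n):
    # distribution of a^2 mod p for a uniform on Z_p
    single = [Fraction(0)] * p
    for a in range(p):
        single[a * a % p] += Fraction(1, p)
    # n-fold convolution, starting from the point mass at 0 (the empty sum)
    dist = [Fraction(1)] + [Fraction(0)] * (p - 1)
    for _ in range(n):
        new = [Fraction(0)] * p
        for r in range(p):
            for s in range(p):
                new[(r + s) % p] += dist[r] * single[s]
        dist = new
    squares = {a * a % p for a in range(p)}  # includes 0
    return sum(dist[q] for q in squares)

def closed_form(p, n):
    r, k = (1 if p % 4 == 1 else -1), n // 2
    return Fraction(p + 1, 2 * p) + Fraction(p - 1, 2 * p) / (r**k * p**k)

for p in (3, 5, 7, 11, 13):
    for n in range(1, 6):
        assert prob_square_sum(p, n) == closed_form(p, n), (p, n)
print("closed form confirmed for small p and n")
```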
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 97, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231135249137878, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Mole_fraction
# Mole fraction

In chemistry, the mole fraction $x_i$ is defined as the amount of a constituent $n_i$ divided by the total amount of all constituents in a mixture $n_{tot}$:[1]

$x_i = \frac{n_i}{n_{tot}}$

The sum of all the mole fractions is equal to 1:

$\sum_{i=1}^{N} n_i = n_{tot} ; \; \sum_{i=1}^{N} x_i = 1$

The mole fraction is also called the amount fraction.[1] It is identical to the number fraction, which is defined as the number of molecules of a constituent $N_i$ divided by the total number of all molecules $N_{tot}$. It is one way of expressing the composition of a mixture with a dimensionless quantity (mass fraction is another). The mole fraction is sometimes denoted by the lowercase Greek letter $\chi$ (chi) instead of a Roman $x$.[2][3] For mixtures of gases, IUPAC recommends the letter $y$.[1]

## Properties

Mole fraction is used very frequently in the construction of phase diagrams. It has a number of advantages:

• it is not temperature dependent (unlike molar concentration) and does not require knowledge of the densities of the phase(s) involved
• a mixture of known mole fraction can be prepared by weighing off the appropriate masses of the constituents
• the measure is symmetric: in the mole fractions x=0.1 and x=0.9, the roles of 'solvent' and 'solute' are reversed.
• in a mixture of ideal gases, the mole fraction can be expressed as the ratio of partial pressure to total pressure of the mixture.

## Related quantities

### Mass fraction

The mass fraction $w_i$ can be calculated using the formula

$w_i = x_i \cdot \frac {M_i}{M}$

where $M_i$ is the molar mass of the component $i$ and $M$ is the average molar mass of the mixture. Replacing the expression for the average molar mass:

$w_i = x_i \cdot \frac {M_i}{\sum_i x_i M_i}$

### Mole percentage

Multiplying the mole fraction by 100 gives the mole percentage, also referred to as amount/amount percent (abbreviated as n/n%).

### Mass concentration

The conversion to and from mass concentration $\rho_i$ is given by:

$x_i = \frac{\rho_i}{\rho} \cdot \frac{M}{M_i}$

where $M$ is the average molar mass of the mixture.

### Molar concentration

The conversion to molar concentration $c_i$ is given by:

$c_i = \frac{x_i \cdot \rho}{M} = x_i c$

or

$c_i = \frac{x_i \cdot \rho}{\sum_i x_i M_i}$

where $M$ is the average molar mass of the solution, $c$ the total molar concentration, and $\rho$ the density of the solution.

### Mass and molar mass

The mole fraction can be calculated from the masses $m_i$ and molar masses $M_i$ of the components:

$x_i= \frac{\frac{m_i}{M_i}}{\sum_i \frac{m_i}{M_i}}$

## Spatial variation and gradient

In a spatially non-uniform mixture, the mole fraction gradient triggers the phenomenon of diffusion.

## References

1. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "amount fraction".
2. Zumdahl, Steven S. (2008). Chemistry (8th ed.). Cengage Learning. p. 201. ISBN 0-547-12532-1.
3. Spencer, James N.; Bodner, George M.; Rickard, Lyman H. (2010). Chemistry: Structure and Dynamics (5th ed.). Hoboken, N.J.: Wiley. p. 357. ISBN 978-0-470-58711-9.
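As a small worked illustration of the "Mass and molar mass" formula above, here is how one would compute mole fractions from measured masses in practice (a minimal Python sketch; the mixture, the function name, and the rounding are just examples, and the molar masses are the usual tabulated values):

```python
# Mole fractions from component masses, via x_i = (m_i/M_i) / sum_j (m_j/M_j).

def mole_fractions(masses, molar_masses):
    """Parallel lists: mass (g) and molar mass (g/mol) of each component."""
    amounts = [m / M for m, M in zip(masses, molar_masses)]  # moles n_i
    n_tot = sum(amounts)
    return [n / n_tot for n in amounts]

# Example: 10 g ethanol (M = 46.07 g/mol) dissolved in 90 g water (M = 18.02 g/mol)
x_ethanol, x_water = mole_fractions([10.0, 90.0], [46.07, 18.02])
print(round(x_ethanol, 4), round(x_water, 4), x_ethanol + x_water)  # sums to 1
```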
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8650503158569336, "perplexity_flag": "middle"}
http://nrich.maths.org/9096
# Some Thoughts on Teaching about Area

##### Stage: 3, 4 and 5

Article by Alison Kiddle

One of the most common problems that arises when secondary school students work on problems about area is the confusion between area and perimeter. Looking at a common textbook treatment of area and perimeter suggests why this confusion might arise: many textbooks introduce perimeter and area separately, but on consecutive pages of the book, using exactly the same diagrams and examples, often looking at areas of rectangles. A child takes away from this a formula for area (length multiplied by width) and a formula for perimeter ($2l + 2w$), having worked mainly with rectangles. Perhaps a better approach would be to work with the concepts of area and perimeter together. By looking conceptually at the two quantities, it becomes clearer that they measure two different properties of a shape, and the names 'area' and 'perimeter' can be hung onto the concepts once the student has a need of names to refer to them.

Take a look at the problem Warmsnug Double Glazing. A student does not need a formula for calculating area or perimeter in order to appreciate that some of the windows need more glass than others, and some need more frame than others. To solve the problem, the concepts of area and perimeter need to be present, but the names do not.

Another common misconception among students with a shaky grasp of area and perimeter is the notion that the two increase together, that is, the greater the area the greater the perimeter and vice versa. Changing Areas, Changing Perimeters directly challenges that notion by provoking students to think about the two quantities together and to vary one while keeping the other the same.

Area is an important concept for older students too. When learning about calculus, the idea of integration as area becomes very useful for applications. Many students have a superficial understanding of differentiation based on applying a rule ($x^n$ differentiates to give $nx^{n-1}$) and then learn integration as the inverse of differentiation. In the same way that learning about area and perimeter separately, as rules to be memorised, leads to students confusing the two concepts, many markers of A level papers will have seen students integrating rather than differentiating, or vice versa, because the students are following a rule rather than showing their understanding of a concept. Integration Matcher requires no understanding of integration or differentiation, as the cards can be handed out with the explanation that some of the graphs show the area under the curve of some of the other graphs, and students can be invited to match them. This helps students to develop a conceptual understanding of integration as area that can then be applied to problems. Of course, the same set of cards could be used to encourage students to explore graphs of the gradient as a precursor to differentiation!

Research shows that conceptual understanding leads to students retaining knowledge much better than rote learning of formulae. We hope that working on some of the tasks identified above will help students to develop the important concepts that will lead to better understanding as well as examination success!

The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9566295742988586, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/2012/11/14/expanding-polynomials-over-finite-fields-of-large-characteristic-and-a-regularity-lemma-for-definable-sets/
Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets

By Terence Tao. 14 November, 2012, in math.AG, math.CO, paper | Tags: algebraic regularity lemma, definability, etale fundamental group, expanders, polynomials, regularity lemma, van Kampen's theorem

I've just uploaded to the arXiv my paper "Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets", submitted to Contrib. Disc. Math. The motivation of this paper is to understand a certain polynomial variant of the sum-product phenomenon in finite fields. This phenomenon asserts that if ${A}$ is a non-empty subset of a finite field ${F}$, then either the sumset ${A+A := \{a+b: a,b \in A\}}$ or product set ${A \cdot A := \{ab: a,b \in A \}}$ will be significantly larger than ${A}$, unless ${A}$ is close to a subfield of ${F}$ (or to ${\{1\}}$). In particular, in the regime when ${A}$ is large, say ${|F|^{1-c} < |A| \leq |F|}$, one expects an expansion bound of the form

$\displaystyle |A+A| + |A \cdot A| \gg (|F|/|A|)^{c'} |A| \ \ \ \ \ (1)$

for some absolute constants ${c, c' > 0}$. Results of this type are known; for instance, Hart, Iosevich, and Solymosi obtained precisely this bound for ${(c,c')=(3/10,1/3)}$ (in the case when ${|F|}$ is prime), which was then improved by Garaev to ${(c,c')=(1/3,1/2)}$.

We have focused here on the case when ${A}$ is a large subset of ${F}$, but sum-product estimates are also extremely interesting in the opposite regime in which ${A}$ is allowed to be small (see for instance the papers of Katz-Shen and Li and of Garaev for some recent work in this case, building on some older papers of Bourgain, Katz and myself and of Bourgain, Glibichuk, and Konyagin). However, the techniques used in these two regimes are rather different. For large subsets of ${F}$, it is often profitable to use techniques such as the Fourier transform or the Cauchy-Schwarz inequality to "complete" a sum over a large set (such as ${A}$) into a sum over the entire field ${F}$, and then to use identities concerning complete sums (such as the Weil bound on complete exponential sums over a finite field). For small subsets of ${F}$, such techniques are usually quite inefficient, and one has to proceed by somewhat different combinatorial methods which do not try to exploit the ambient field ${F}$. But my paper focuses exclusively on the large ${A}$ regime, and unfortunately does not directly say much (except through reasoning by analogy) about the small ${A}$ case.

Note that it is necessary to have both ${A+A}$ and ${A \cdot A}$ appear on the left-hand side of (1). Indeed, if one just has the sumset ${A+A}$, then one can set ${A}$ to be a long arithmetic progression to give counterexamples to (1). Similarly, if one just has a product set ${A \cdot A}$, then one can set ${A}$ to be a long geometric progression. The sum-product phenomenon can then be viewed as the assertion that a set cannot simultaneously behave like a long arithmetic progression and a long geometric progression, unless it is already very close to behaving like a subfield.
Now we consider a polynomial variant of the sum-product phenomenon, where we consider a polynomial image

$\displaystyle P(A,A) := \{ P(a,b): a,b \in A \}$

of a set ${A \subset F}$ with respect to a polynomial ${P: F \times F \rightarrow F}$; we can also consider the asymmetric setting of the image

$\displaystyle P(A,B) := \{ P(a,b): a \in A,b \in B \}$

of two subsets ${A,B \subset F}$. The regime we will be interested in is the one where the field ${F}$ is large, and the subsets ${A, B}$ of ${F}$ are also large, but the polynomial ${P}$ has bounded degree. Actually, for technical reasons it will not be enough for us to assume that ${F}$ has large cardinality; we will also need to assume that ${F}$ has large characteristic. (The two concepts are synonymous for fields of prime order, but not in general; for instance, the field with ${2^n}$ elements becomes large as ${n \rightarrow \infty}$ while the characteristic remains fixed at ${2}$, and is thus not going to be covered by the results in this paper.)

In this paper of Vu, it was shown that one could replace ${A \cdot A}$ with ${P(A,A)}$ in (1), thus obtaining a bound of the form

$\displaystyle |A+A| + |P(A,A)| \gg (|F|/|A|)^{c'} |A|$

whenever ${|A| \geq |F|^{1-c}}$ for some absolute constants ${c, c' > 0}$, unless the polynomial ${P}$ had the degenerate form ${P(x,y) = Q(L(x,y))}$ for some linear function ${L: F \times F \rightarrow F}$ and polynomial ${Q: F \rightarrow F}$, in which case ${P(A,A)}$ behaves too much like ${A+A}$ to get reasonable expansion. In this paper, we focus instead on the question of bounding ${P(A,A)}$ alone. In particular, one can ask to classify the polynomials ${P}$ for which one has the weak expansion property

$\displaystyle |P(A,A)| \gg (|F|/|A|)^{c'} |A|$

whenever ${|A| \geq |F|^{1-c}}$ for some absolute constants ${c, c' > 0}$. One can also ask for stronger versions of this expander property, such as the moderate expansion property

$\displaystyle |P(A,A)| \gg |F|$

whenever ${|A| \geq |F|^{1-c}}$, or the almost strong expansion property

$\displaystyle |P(A,A)| \geq |F| - O( |F|^{1-c'})$

whenever ${|A| \geq |F|^{1-c}}$. (One can consider even stronger expansion properties, such as the strong expansion property ${|P(A,A)| \geq |F|-O(1)}$, but it was shown by Gyarmati and Sarkozy that this property cannot hold for polynomials of two variables of bounded degree when ${|F| \rightarrow \infty}$.) One can also consider asymmetric versions of these properties, in which one obtains lower bounds on ${|P(A,B)|}$ rather than ${|P(A,A)|}$.

The example of a long arithmetic or geometric progression shows that the polynomials ${P(x,y) = x+y}$ or ${P(x,y) = xy}$ cannot be expanders in any of the above senses, and a similar construction also shows that polynomials of the form ${P(x,y) = Q(f(x)+f(y))}$ or ${P(x,y) = Q(f(x) f(y))}$ for some polynomials ${Q, f: F \rightarrow F}$ cannot be expanders. On the other hand, there are a number of results in the literature establishing expansion for various polynomials in two or more variables that are not of this degenerate form (in part because such results are related to incidence geometry questions in finite fields, such as the finite field version of the Erdos distinct distances problem). For instance, Solymosi established weak expansion for polynomials of the form ${P(x,y) = f(x)+y}$ when ${f}$ is a nonlinear polynomial, with generalisations by Hart, Li, and Shen for various polynomials of the form ${P(x,y) = f(x) + g(y)}$ or ${P(x,y) = f(x) g(y)}$.
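As a toy illustration of the distinction being drawn here (a rough numerical sketch only; the specific prime, set, and polynomial are my own choices, and no finite experiment substitutes for the asymptotic statements), one can compare the images of a highly additively structured set under $x+y$, $xy$, and a polynomial of Solymosi's type such as $x^2+y$:

```python
# Compare |A+A|, |A.A| and |P(A,A)| for P(x,y) = x^2 + y over F_p, taking A to be
# an arithmetic progression (so A+A stays small, but P(A,A) still spreads out).
p = 1009          # a prime, chosen arbitrarily
A = range(100)    # the arithmetic progression 0, 1, ..., 99

sumset = {(a + b) % p for a in A for b in A}
prodset = {(a * b) % p for a in A for b in A}
image = {(a * a + b) % p for a in A for b in A}

print(f"|A|={len(A)}  |A+A|={len(sumset)}  |A.A|={len(prodset)}  "
      f"|P(A,A)|={len(image)}  |F|={p}")
# |A+A| comes out to exactly 2|A|-1 = 199 here, while |P(A,A)| is a
# positive fraction of |F|, consistent with weak expansion for x^2 + y.
```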
Further examples of expanding polynomials appear in the work of Shkredov, Iosevich-Rudnev, and Bukh-Tsimerman, as well as the previously mentioned papers of Vu and of Hart-Li-Shen, and these papers in turn cite many further results which are in the spirit of the polynomial expansion bounds discussed here (for instance, dealing with the small ${A}$ regime, or working in other fields such as ${{\bf R}}$ instead of in finite fields ${F}$). We will not summarise all these results here; they are summarised briefly in my paper, and in more detail in the papers of Hart-Li-Shen and of Bukh-Tsimerman. But we will single out one of the results of Bukh-Tsimerman, which is one of the most recent and general of these results, and closest to the results of my own paper. Roughly speaking, in this paper it is shown that a polynomial ${P(x,y)}$ of two variables and bounded degree will be a moderate expander if it is non-composite (in the sense that it does not take the form ${P(x,y) = Q(R(x,y))}$ for some non-linear polynomial ${Q}$ and some polynomial ${R}$, possibly having coefficients in the algebraic closure of ${F}$) and is monic in both ${x}$ and ${y}$, thus it takes the form ${P(x,y) = x^d + S(x,y)}$ for some ${d \geq 1}$ and some polynomial ${S}$ of degree at most ${d-1}$ in ${x}$, and similarly with the roles of ${x}$ and ${y}$ reversed, unless ${P}$ is of the form ${P(x,y) = f(x) + g(y)}$ or ${P(x,y) = f(x) g(y)}$ (in which case the expansion theory is covered to a large extent by the previous work of Hart, Li, and Shen).

Our first main result improves upon the Bukh-Tsimerman result by strengthening the notion of expansion and removing the non-composite and monic hypotheses, but imposes a condition of large characteristic. I'll state the result here slightly informally as follows:

Theorem 1 (Criterion for moderate expansion) Let ${P: F \times F \rightarrow F}$ be a polynomial of bounded degree over a finite field ${F}$ of sufficiently large characteristic, and suppose that ${P}$ is not of the form ${P(x,y) = Q(f(x)+g(y))}$ or ${P(x,y) = Q(f(x)g(y))}$ for some polynomials ${Q,f,g: F \rightarrow F}$. Then one has the (asymmetric) moderate expansion property

$\displaystyle |P(A,B)| \gg |F|$

whenever ${|A| |B| \ggg |F|^{2-1/8}}$.

This is basically a sharp necessary and sufficient condition for asymmetric moderate expansion for polynomials of two variables. In the paper, analogous sufficient conditions for weak or almost strong expansion are also given, although these are not quite as satisfactory (particularly the conditions for almost strong expansion, which include a somewhat complicated algebraic condition which is not easy to check, and which I would like to simplify further, but was unable to). The argument here resembles the Bukh-Tsimerman argument in many ways. One can view the result as an assertion about the expansion properties of the graph ${\{ (a,b,P(a,b)): a,b \in F \}}$, which can essentially be thought of as a somewhat sparse three-uniform hypergraph on ${F}$.
Being sparse, it is difficult to directly apply techniques from dense graph or hypergraph theory for this situation; however, after a few applications of the Cauchy-Schwarz inequality, it turns out (as observed by Bukh and Tsimerman) that one can essentially convert the problem to one about the expansion properties of the set

$\displaystyle \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in F \} \ \ \ \ \ (2)$

(actually, one should view this as a multiset, but let us ignore this technicality) which one expects to be a dense set in ${F^4}$, except in the case when the associated algebraic variety

$\displaystyle \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in \overline{F} \}$

fails to be Zariski dense, but it turns out that in this case one can use some differential geometry and Riemann surface arguments (after first invoking the Lefschetz principle and the high characteristic hypothesis to work over the complex numbers instead of over a finite field) to show that ${P}$ is of the form ${Q(f(x)+g(y))}$ or ${Q(f(x)g(y))}$. This reduction is related to the classical fact that the only one-dimensional algebraic groups over the complex numbers are the additive group ${({\bf C},+)}$, the multiplicative group ${({\bf C} \backslash \{0\},\times)}$, or the elliptic curves (but the latter have a group law given by rational functions rather than polynomials, and so ultimately end up being eliminated from consideration, though they would play an important role if one wanted to study the expansion properties of rational functions).

It remains to understand what the structure of the set (2) is. To understand dense graphs or hypergraphs, one of the standard tools of choice is the Szemerédi regularity lemma, which carves up such graphs into a bounded number of cells, with the graph behaving pseudorandomly on most pairs of cells. However, the bounds in this lemma are notoriously poor (the regularity obtained is an inverse tower exponential function of the number of cells), and this makes this lemma unsuitable for the type of expansion properties we seek (in which we want to deal with sets ${A}$ which have a polynomial sparsity, e.g. ${|A| \sim |F|^{1-c}}$). Fortunately, in the case of sets such as (2) which are definable over the language of rings, it turns out that a much stronger regularity lemma is available, which I call the "algebraic regularity lemma". I'll state it (again, slightly informally) in the context of graphs as follows:

Lemma 2 (Algebraic regularity lemma) Let ${F}$ be a finite field of large characteristic, and let ${V, W}$ be definable sets over ${F}$ of bounded complexity (i.e. ${V, W}$ are subsets of ${F^n}$, ${F^m}$ for some bounded ${n,m}$ that can be described by some first-order predicate in the language of rings of bounded length and involving boundedly many constants). Let ${E}$ be a definable subset of ${V \times W}$, again of bounded complexity (one can view ${E}$ as a bipartite graph connecting ${V}$ and ${W}$). Then one can partition ${V, W}$ into a bounded number of cells ${V_1,\ldots,V_a}$, ${W_1,\ldots,W_b}$, still definable with bounded complexity, such that for all pairs ${i=1,\ldots,a}$, ${j=1,\ldots,b}$, one has the regularity property

$\displaystyle |E \cap (A \times B)| = d_{ij} |A| |B| + O( |F|^{-1/4} |V| |W| )$

for all ${A \subset V_i, B \subset W_j}$, where ${d_{ij} := \frac{|E \cap (V_i \times W_j)|}{|V_i| |W_j|}}$ is the density of ${E}$ in ${V_i \times W_j}$.
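As an aside, one can get a feel for what polynomial-strength regularity looks like in the simplest definable graph of this type, the Paley graph $E = \{(x,y): x-y \text{ is a nonzero square}\}$ over a prime field with $p \equiv 1 \pmod 4$: the classical Gauss sum estimate makes the edge density between any two test sets concentrate around $1/2$, with errors on the scale of $\sqrt{p|A||B|}$. The following small sketch (my own toy analogy for the lemma, not its content; the parameters are arbitrary) exhibits this numerically:

```python
# Edge counts in the Paley graph on F_p: for any test sets A, B the count
# |E ∩ (A x B)| equals |A||B|/2 up to an error of order sqrt(p |A| |B|).
import random

p = 1013  # prime with p = 1 (mod 4), so "x - y is a nonzero square" is symmetric
squares = {a * a % p for a in range(1, p)}

random.seed(0)
A = random.sample(range(p), 250)
B = random.sample(range(p), 250)

edges = sum((x - y) % p in squares for x in A for y in B)
main_term = len(A) * len(B) / 2
print(edges, main_term, abs(edges - main_term) / (p * len(A) * len(B)) ** 0.5)
# The normalised error in the last slot stays O(1) however A and B are chosen.
```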
This lemma resembles the Szemerédi regularity lemma, but regularises all pairs of cells (not just most pairs), and the regularity is of polynomial strength in ${|F|}$, rather than inverse tower exponential in the number of cells. Also, the cells are not arbitrary subsets of ${V,W}$, but are themselves definable with bounded complexity, which turns out to be crucial for applications. I am optimistic that this lemma will be useful not just for studying expanding polynomials, but for many other combinatorial questions involving dense subsets of definable sets over finite fields. The above lemma is stated for graphs ${E \subset V \times W}$, but one can iterate it to obtain an analogous regularisation of hypergraphs ${E \subset V_1 \times \ldots \times V_k}$ for any bounded ${k}$ (for application to (2), we need ${k=4}$). This hypergraph regularity lemma, by the way, is not analogous to the strong hypergraph regularity lemmas of Rodl et al. and Gowers developed in the last six or so years, but closer in spirit to the older (but weaker) hypergraph regularity lemma of Chung which gives the same “order ${1}$” regularity that the graph regularity lemma gives, rather than higher order regularity. One feature of the proof of Lemma 2 which I found striking was the need to use some fairly high powered technology from algebraic geometry, and in particular the Lang-Weil bound on counting points in varieties over a finite field (discussed in this previous blog post), and also the theory of the etale fundamental group. Let me try to briefly explain why this is the case. A model example of a definable set of bounded complexity ${E}$ is a set ${E \subset F^n \times F^m}$ of the form $\displaystyle E = \{ (x,y) \in F^n \times F^m: \exists t \in F; P(x,y,t)=0 \}$ for some polynomial ${P: F^n \times F^m \times F \rightarrow F}$. (Actually, it turns out that one can essentially write all definable sets as an intersection of sets of this form; see this previous blog post for more discussion.) To regularise the set ${E}$, it is convenient to square the adjacency matrix, which soon leads to the study of counting functions such as $\displaystyle \mu(x,x') := | \{ (y,t,t') \in F^m \times F \times F: P(x,y,t) = P(x',y,t') = 0 \}|.$ If one can show that this function ${\mu}$ is “approximately finite rank” in the sense that (modulo lower order errors, of size ${O(|F|^{-1/2})}$ smaller than the main term), this quantity depends only on a bounded number of bits of information about ${x}$ and a bounded number of bits of information about ${x'}$, then a little bit of linear algebra will then give the required regularity result. One can recognise ${\mu(x,x')}$ as counting ${F}$-points of a certain algebraic variety $\displaystyle V_{x,x'} := \{ (y,t,t') \in \overline{F}^m \times \overline{F} \times \overline{F}: P(x,y,t) = P(x',y,t') = 0 \}.$ The Lang-Weil bound (discussed in this previous post) provides a formula for this count, in terms of the number ${c(x,x')}$ of geometrically irreducible components of ${V_{x,x'}}$ that are defined over ${F}$ (or equivalently, are invariant with respect to the Frobenius endomorphism associated to ${F}$). So the problem boils down to ensuring that this quantity ${c(x,x')}$ is “generically bounded rank”, in the sense that for generic ${x,x'}$, its value depends only on a bounded number of bits of ${x}$ and a bounded number of bits of ${x'}$. Here is where the étale fundamental group comes in. 
One can view ${V_{x,x'}}$ as a fibre product ${V_x \times_{\overline{F}^m} V_{x'}}$ of the varieties $\displaystyle V_x := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x,y,t) = 0 \}$ and $\displaystyle V_{x'} := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x',y,t) = 0 \}$ over ${\overline{F}^m}$. If one is in sufficiently high characteristic (or even better, in zero characteristic, which one can reduce to by an ultraproduct (or nonstandard analysis) construction, similar to that discussed in this previous post), the varieties ${V_x,V_{x'}}$ are generically finite étale covers of ${\overline{F}^m}$, and the fibre product ${V_x \times_{\overline{F}^m} V_{x'}}$ is then also generically a finite étale cover. One can count the components of a finite étale cover of a connected variety by counting the number of orbits of the étale fundamental group acting on a fibre of that variety (much as the number of components of a cover of a connected manifold is the number of orbits of the topological fundamental group acting on that fibre). So if one understands the étale fundamental group of a certain generic subset of ${\overline{F}^m}$ (formed by intersecting together an ${x}$-dependent generic subset of ${\overline{F}^m}$ with an ${x'}$-dependent generic subset), this in principle controls ${c(x,x')}$. It turns out that one can decouple the ${x}$ and ${x'}$ dependence of this fundamental group by using an étale version of the van Kampen theorem for the fundamental group, which I discussed in this previous blog post. With this fact (and another deep fact about the étale fundamental group in zero characteristic, namely that it is topologically finitely generated), one can obtain the desired generic bounded rank property of ${c(x,x')}$, which gives the regularity lemma. In order to expedite the deployment of all this algebraic geometry (as well as some Riemann surface theory), it is convenient to use the formalism of nonstandard analysis (or the ultraproduct construction), which among other things can convert quantitative, finitary problems in large characteristic into equivalent qualitative, infinitary problems in zero characteristic (in the spirit of this blog post). This allows one to use several tools from those fields as “black boxes”; not just the theory of étale fundamental groups (which are considerably simpler and more favorable in characteristic zero than they are in positive characteristic), but also some results limiting the morphisms between compact Riemann surfaces of high genus (such as the de Franchis theorem, the Riemann-Hurwitz formula, or the fact that all morphisms between elliptic curves are essentially group homomorphisms), which would be quite unwieldy to utilise if one did not first pass to the zero characteristic case (and thence to the complex case) via the ultraproduct construction (followed by the Lefschetz principle). I found this project to be particularly educational for me, as it forced me to wander outside of my usual range by quite a bit in order to pick up the tools from algebraic geometry and Riemann surfaces that I needed (in particular, I read through several chapters of EGA and SGA for the first time). This did however put me in the slightly unnerving position of having to use results (such as the Riemann existence theorem) whose proofs I have not fully verified for myself, but which are easy to find in the literature, and widely accepted in the field. 
I suppose this type of dependence on results in the literature is more common in the more structured fields of mathematics than it is in analysis, which by its nature has fewer reusable black boxes, and so key tools often need to be rederived and modified for each new application. (This distinction is discussed further in this article of Gowers.)

14 comments

Fascinating! Useful?

14 November, 2012 at 10:23 am, Emmanuel Kowalski: The last link seems to be wrong… I think that maybe, instead of passing to characteristic 0, one should in many cases here be able to argue that one works with tame covers and fundamental groups. Even in positive characteristic, these behave like the characteristic zero picture.

Thanks, I've corrected the link now. I agree that most of the arguments in the paper should also extend to the positive characteristic case (though one may still need some effective lower bound on that characteristic in terms of the degrees of the various varieties and morphisms involved, for instance to keep the varieties generically smooth and to stay in the prime-to-p component of the etale fundamental group). My intuition on these topics deteriorates quite rapidly though, once I don't get to use the Lefschetz principle to run to the complex algebraic geometry setting.

Impressive breadth of tools and references! A tiny nitpick: in the paper, there are some missing letters and accents in the tricky ref 27 (the others seem fine). According to the french wikipedia, the TeX should rather read something like: \bibitem{sga2} A. Grothendieck, M. Raynaud, (Y. Laszlo, ed.), Cohomologie locale des faisceaux coh\'erents et th\'eor\`emes de Lefschetz locaux et globaux (SGA 2). Documents Math\'ematiques, 4, Soci\'et\'e Math\'ematique de France, Paris (2005). [New edition of the 1968 original]. [Thanks, this will be corrected in the next revision of the ms - T.]

For what it is worth, my paper on SL_2 included a proof that a function P(x,y) was expanding in your sense for any sets (large or small). This was then used in a crucial step (since P(x,y) appeared as a trace of a product of elements of SL_2). Of course, P wasn't quite a polynomial – rather, it was a polynomial on x, y, x^{-1} and y^{-1}. I've remained interested in the question, though – for which polynomials (on x and y, or on x, y, x^{-1} and y^{-1}) can we prove expansion even for very small subsets of finite fields?

Thanks, Harald, for pointing that out!
(Though, strictly speaking, your examples (say, Proposition 3.3 of http://arxiv.org/pdf/math/0509024.pdf for sake of concreteness) are actually polynomial functions of the products $x_1 \ldots x_{20}, y_1 \ldots y_{20}$ and their inverses, and they don’t quite give expansion in the senses I state above because epsilon is not required to depend linearly on delta in the limit $\delta \to 0$, but it is certainly in the same spirit.) The methods in my paper (based on regularity lemmas) only work for very large sets (of size $|F|^{1-1/16}$ or larger, basically). For extremely small sets (of size less than $\log |F|$) there should be some sort of Lefschetz principle that allows one to embed the configuration into the complex field, where the work of Elekes and Szabo gives satisfactory results. It seems reasonable to conjecture that the Elekes-Szabo theory extends to sets of cardinality up to $|F|^c$ for some absolute constant c (for fields of prime order at least, to avoid subfield obstructions), but then there is presumably some crossover to the large set case when $P(A,A)$ begins to have positive density in F. Terry, Prop. 3.3 of http://arxiv.org/pdf/math/0509024.pdf works uniformly for small and medium-sized sets, which are (in my view) the hardest cases; I didn’t optimize things for large sets because other (easier) methods worked for them in the context I was working in. I gave these matters some thoughts a few years ago (I think Akshay and I talked about them). I would be pretty impressed if the transference arguments that work for very small sets could be extended to size $|F|^\delta$, $\delta>0$ a small constant. I would imagine there would still be a large gap between small and large sets even in this case, and that would be the main challenge. My intuition is that there should be large-set methods that work for all $|A|\geq \sqrt{F}$ (see: Deligne). Don’t you agree? Sorry, I meant to say that the conclusions of the transference argument should extend to the $|F|^\delta$ case (in the prime order case, at least), even if the transference method itself breaks down. (For instance, one could speculatively consider some sort of “approximate embedding” of such smallish sets into characteristic zero which isn’t an exact embedding in the Freiman sense, but is still somehow “good enough” for some vestige of the characteristic zero arguments to be applicable.) For large sets, I agree that $|F|^{1/2}$ is the natural barrier (for weak expansion, at least) since it is the last place where subfield obstructions can occur. (For moderate or strong expansion, there is the possibility of larger counterexamples, e.g. by intersecting together large arithmetic progressions with large geometric progressions, so I don’t have a firm intuition for this case.) My arguments use Deligne’s work (via the Lang-Weil bound) but because of the need to use Cauchy-Schwarz several times to eliminate all the arbitrary sets A, it only starts working at $|F|^{1-1/16}$. I can believe that by being more careful, one could reduce the number of applications of Cauchy-Schwarz to get down to $|F|^{1-1/8}$ or $|F|^{1-1/4}$, but to get all the way down to $|F|^{1-1/2}$ would require a very different idea; it can’t just be using Cauchy-Schwarz to “complete” all sums involving A, followed by Deligne to estimate the completed sums. 
(In sufficiently "additive" or "multiplicative" situations one can use the relevant Fourier transform as a replacement for Cauchy-Schwarz, and this can get down to the right barrier of $|F|^{1/2}$, but this trick does not appear to be available in the general case.) In any case, I agree that the theories for small, medium, and large sets will initially all be rather different from each other (much as is the case with the Bourgain-Gamburd expansion machinery), though perhaps a unified treatment will eventually emerge (for instance, one may optimistically hope that the small set theory will eventually extend all the way up to $|F|^{1/2}$, and the large set theory down to $|F|^{1/2}$, thus ultimately eliminating the need for a medium set theory).

Sorry for asking this question here, but how do you use mathematical symbols between your sentences? Do you use a special program or something?! [The short answer is yes. See the second section of http://terrytao.wordpress.com/about/ - T.]

28 November, 2012 at 7:25 am, Derrick: Does the result on asymmetric moderate expanders extend to the case when one considers the image of arbitrary subsets $E$ of $\mathbb F^2$ as opposed to Cartesian products of sets $E=A\times B$? Do you think analogous statements can be made in a geometric measure theory context? Perhaps, define strong expansion to be something along the lines of: the image of a set with large fractal dimension contains an interval; moderate expansion to mean that large fractal dimension implies positive Lebesgue measure; and weak expansion to mean that a set with large fractal dimension would have an image whose fractal dimension is bigger than $(1-c)$ times the fractal dimension of the set, plus $c$.

For the first question, if one considers images $P(E)$ of arbitrary subsets $E$ of ${\Bbb F}^2$ then one could simply take $E = P^{-1}(C)$ for some arbitrary set $C$, in which case one does not do any better than the easy bound $|P(E)| \gg |E|/|{\Bbb F}|$ (assuming P non-constant of course). For the second question, there are some scattered results of this type. A typical result here is Wolff's result on the Falconer distance conjecture, that if $E \subset {\bf R}^2$ is a compact set with Hausdorff dimension greater than 4/3, then the distance set $\Delta(E) := \{ |x-y|: x,y \in E \}$ has positive Lebesgue measure; among other things, this (morally, at least) implies that the image of $A^4$ under the polynomial $P(x,y,z,w) := (x-y)^2+(z-w)^2$ has positive Lebesgue measure whenever $A \subset {\bf R}$ is a compact set of dimension at least $1/2$. I don't know if anyone has studied the problem for arbitrary polynomials P though. (In principle, the results of Elekes and Szabo could be transferable to this setting, but in practice it is often quite difficult to convert a discrete incidence combinatorics result into a continuous geometric measure theory result; for instance, the near-resolution of the Erdos distance problem by Guth and Katz has, to date, not led to further progress on the continuous counterpart, namely the Falconer distance conjecture.)

[...] online submission system). This paper is loosely related in subject topic to my two previous papers on polynomial expansion and on recurrence in quasirandom groups (with Vitaly Bergelson), although the methods here are [...]

[...] This principle (first laid out in an appendix of Lefschetz's book), among other things, often allows one to use the methods of complex analysis (e.g.
Riemann surface theory) to study many other fields of characteristic zero. There are many variants and extensions of this principle; see for instance this MathOverflow post for some discussion of these. I used this baby version of the Lefschetz principle recently in a paper on expanding polynomial maps. [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 219, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9323318004608154, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/134225-inequality-problem.html
# Thread: inequality problem

1. ## inequality problem

   The real number c is positive and the real numbers a and b are such that $ab > 0$ and $a > b$. How can I disprove the case $a - c < b + c$ for all such a, b, and c? (The case $\frac{c}{a} < \frac{c}{b}$ works). When I tried it, I can't seem to get numbers that disprove it??

2. Originally Posted by donnagirl:
   The real number c is positive and the real numbers a and b are such that $ab > 0$ and $a > b$. How can I disprove the case $a - c < b + c$ for all such a, b, and c?

   How hard did you try? How about a = 100, b = 1, c = 1?

3. Hello donnagirl

   Originally Posted by donnagirl:
   The real number c is positive and the real numbers a and b are such that $ab > 0$ and $a > b$. How can I disprove the case $a - c < b + c$ for all such a, b, and c?

   I'm not quite sure I understand the problem here, because it seems very easy to find a counter-example. In order that $a-c>b+c$, we must ensure that $a-b > 2c$. So, for instance: $a=10, b = 1, c = 3$ gives $a-c=7$ and $b+c = 4$. So $a-c> b+c$.

   Grandad

4. Thanks again Grandad!
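For completeness, here is a tiny brute-force search in the spirit of the replies above (a throwaway Python sketch): it scans small positive integers, for which $ab>0$ and $a>b$ hold automatically, and stops at the first triple where $a-c < b+c$ fails, i.e. where $a-b \ge 2c$.

```python
# Brute-force search for a counterexample to "a - c < b + c" given
# ab > 0, a > b and c > 0 (small positive integers suffice).

def find_counterexample(limit=20):
    for a in range(1, limit):
        for b in range(1, a):            # b < a and both positive, so ab > 0
            for c in range(1, limit):
                if not (a - c < b + c):  # the inequality fails here
                    return a, b, c
    return None

a, b, c = find_counterexample()
print(f"a={a}, b={b}, c={c}: a-c={a-c} is not less than b+c={b+c}")
# First hit: a=3, b=1, c=1; any choice with a-b >= 2c works, matching
# Grandad's remark that a-b > 2c forces a-c > b+c.
```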
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475625157356262, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/33906/the-relationship-between-the-number-of-support-vectors-and-the-number-of-feature
# The relationship between the number of support vectors and the number of features

I ran an SVM against a given data set, and made the following observation: if I change the number of features used to build the classifier, the number of resulting support vectors also changes. I would like to know how to explain this kind of scenario.

- What was the type and style of those extra features? Were they look-a-like variants of the existing features, or some fresher features that you thought might have extra resolving power? – Philip Oakley Aug 8 '12 at 9:13
- This is a document classification problem, and the extra features are just words. I used unigrams to build the feature space. – user3269 Aug 8 '12 at 13:21
- Given @marc's answer, which way did the change go: did the number of vectors rise with the number of features, or the reverse? – Philip Oakley Aug 9 '12 at 8:40
- @Phillip, my original response was wrong. I think the edited answer is accurate now. – Marc Shivers Aug 10 '12 at 1:13

## 1 Answer

If you look at the optimization problem that SVM solves:

$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i \right\}$

s.t. $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0,$ for all $i=1,\dots,n$

the support vectors are those $x_i$ whose margin constraint is active, i.e. for which $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \le 1$ (in particular, every point with $\xi_i > 0$). In other words, they are the data points that are either misclassified, inside the margin, or exactly on its boundary.

Now let's compare the solution to this problem when you have a full set of features, to the case where you throw some features away. Throwing a feature away is functionally equivalent to keeping the feature, but adding a constraint $w_j=0$ for the feature $j$ that we want to discard. When you compare these two optimization problems, and work through the math, it turns out there is no hard relationship between the number of features and the number of support vectors. It could go either way.

It's useful to think about a simple case. Imagine a 2-dim case where your negative and positive examples are clustered around (-1,-1) and (1,1), respectively, and are separable with a diagonal separating hyperplane with 3 support vectors. Now imagine dropping the y-axis feature, so your data is now projected onto the x-axis. If the data are still separable, say at x=0, you'd probably be left with only 2 support vectors, one on each side, so adding the y-feature would increase the number of support vectors. However, if the data are no longer separable, you'd get at least one support vector for each point that's on the wrong side of x=0, in which case adding the y-feature would reduce the number of support vectors.

So, if this intuition is correct, if you're working in very high-dimensional feature spaces, or using a kernel that maps to a high-dimensional feature space, then your data is more likely to be separable, so adding a feature will tend to just add another support vector. Whereas if your data is not currently separable, and you add a feature that significantly improves separability, then you're more likely to see a decrease in the number of support vectors.
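To see one direction of this intuition empirically, here is a quick experiment with scikit-learn (a rough sketch; the data generation and parameters are invented for illustration): start from a single noisy feature, where most points violate the margin, then append a second feature that nearly separates the classes and watch the support-vector count fall.

```python
# Count support vectors before and after adding an informative feature.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = 2 * rng.integers(0, 2, size=n) - 1   # labels in {-1, +1}
x1 = rng.normal(size=n)                  # pure noise: inseparable on its own
x2 = y + 0.2 * rng.normal(size=n)        # strongly class-dependent feature

for X in (x1.reshape(-1, 1), np.column_stack([x1, x2])):
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    print(f"{X.shape[1]} feature(s): {clf.n_support_.sum()} support vectors")
# With x1 alone nearly every point is a support vector; adding x2 makes the
# data close to separable and the count drops sharply. Conversely, as argued
# above, adding a feature to already-separable data can increase the count.
```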
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478727579116821, "perplexity_flag": "head"}
http://en.wikisource.org/wiki/Attempt_of_a_Theory_of_Electrical_and_Optical_Phenomena_in_Moving_Bodies/Introduction
# Attempt of a Theory of Electrical and Optical Phenomena in Moving Bodies/Introduction From Wikisource by Hendrik Lorentz Introduction Hendrik Lorentz ## Introduction § 1. The question as to whether the aether shares the motion of ponderable bodies or not, has still found no answer that satisfies all physicists. For the decision, primarily the aberration of light and related phenomena could be used, but so far none of the two contested theories, neither that of Fresnel, nor that of Stokes, were fully confirmed with respect to all observations, so concerning the choice between the two views we can only weigh against each other the remaining problems for both of them. By that I was long ago led to believe that with Fresnel's view, i.e. with the assumption of a stationary aether, we are on the right way. While against the view of Stokes there is hardly more than one objection, i.e. the doubt that his assumptions regarding the aether-motion in the vicinity of Earth are contradictory[1], but this objection is of great weight, and I can't see at all how it could be eliminated. The difficulties for Fresnel's theory stem from the known interference experiment of Michelson[2] and, as some think, from the experiments, by which Des Coudres in vain sought to find an influence of Earth's motion on the induction of two circuits[3]. The results of the American scientist, however, allow of an interpretation by an auxiliary hypotheses, and the findings of Des Coudres can easily be explained without such one. Concerning the observations of Fizeau[4] on the rotation of polarization in glass columns, the matter is as follows. At first glance, the result is decidedly against Stokes' view. Yet when I tried to improve Fresnel's theory, the explanation of Fizeau's experiments was not quite successful, so I gradually suspected that this result had been obtained by observational error, or at least it had not met the theoretical considerations which formed the basis of the experiments. And Fizeau was so friendly to tell my colleague van de Sande Bakhuijzen after his request, that at present he himself doesn't see his observations as crucial. In the further course of this work, I will come back in more detail to some of the issues raised at this place. Here I was concerned only with the preliminary justification of the standpoint I have taken. In favor of Fresnel's theory several well-known reasons can be cited. Especially the impossibility of locking the aether between solid or liquid walls. As far as we know, a space devoid of air behaves (in the mechanical sense) like a real vacuum, when ponderable bodies are in motion. When we see how the mercury of a barometer rises to the top when the tube is inclined, or how easily a closed metal shell can be compressed, one can not avoid the idea, that solid and liquid bodies let the aether pass through without hindrance. One hardly will assume, that this medium could suffer a compression, without giving resistance to it. That transparent bodies can move, without communicating their full velocity to the contained aether, was proven by Fizeau's famous interference experiment with streaming water[5]. This experiment, that later was repeated by Michelson and Morley[6] on a larger scale, could impossibly have had the observed success, when everything within the tube would possess a common velocity. By that, only the behavior of nontransparent substances and very extended bodies remains questionable. 
It should be noted, moreover, that we can imagine the permeability of a body in two ways. First, this property might not be present in individual atoms, yet when the atoms were very small compared to the gaps between them, it might be present in matter of greater extension; but secondly, it may be assumed - and this hypothesis I will use in the following - that ponderable matter is absolutely permeable, namely that at the location of an atom the aether exists at the same time, which would be understandable if we were allowed to see the atoms as local modifications of the aether.

It is not my intention to enter into such speculations more closely, or to express assumptions about the nature of the aether. I only wish to keep myself as free as possible from preconceived opinions about that substance, and I won't, for example, attribute to it the properties of ordinary liquids and gases. If it is the case that a representation of the phenomena would succeed best under the condition of absolute permeability, then one should admit such an assumption for the time being, and leave it to subsequent research to give us a deeper understanding.

That we cannot speak about an absolute rest of the aether is self-evident; this expression would not even make sense. When I say for the sake of brevity that the aether is at rest, this only means that one part of this medium does not move against the other one, and that all perceptible motions are relative motions of the celestial bodies in relation to the aether.

§ 2. Since Maxwell's views became more and more accepted, the question of the properties of the aether has become highly important also for the theory of electricity. Strictly speaking, not a single experiment in which a charged body or a current conductor moves can be treated rigorously if the state of motion of the aether is not considered at the same time. In any phenomenon of electricity, the question arises whether an influence of the earth's motion is to be expected; and regarding the consequences of the latter for optical phenomena, we have to demand from the electro-magnetic theory of light that it can account for the already established facts. Namely, the aberration theory is not one of those parts of optics for whose treatment the general principles of the wave theory suffice. Once a telescope comes into play, one cannot help but apply Fresnel's dragging coefficient to the lenses, yet its value can only be derived from special assumptions about the nature of light vibrations.

The fact that the electro-magnetic theory of light really leads to the coefficient assumed by Fresnel was shown by me two years ago[7]. Since then I have greatly simplified the theory and extended it also to the processes involved in reflection and refraction, as well as to birefringent bodies[8]. It may be permitted for me to come back to this matter.

To come to the basic equations for the phenomena of electricity in moving bodies, I adopted a view that has been advocated in recent years by several physicists; I have indeed assumed that small electrically charged molecules exist in all bodies, and that all electric processes are based on the location and motion of these "ions". As regards the electrolytes, this view is widely recognized as the only possible one, and Giese[9], Schuster[10], Arrhenius[11], Elster and Geitel[12] have defended the view that also as regards the conduction of electricity in gases, we are dealing with a convection by ions.
It seems to me that nothing prevents us from believing that the molecules of ponderable dielectric bodies contain such particles, which are connected to certain equilibrium positions and are moved out of them only by external electric forces; it is precisely in this that the "dielectric polarization" of such bodies would consist. The periodically changing polarization which, according to Maxwell's theory, constitutes a ray of light becomes, in this conception, a vibration of the ions. It is well known that many researchers who stood on the basis of the older theory of light considered the resonance of ponderable matter as the cause of color dispersion, and this explanation can in the main also be incorporated into the electro-magnetic theory of light, for which it is only necessary to ascribe to the ions a certain mass. This I have shown in a previous paper[13], in which I admittedly derived the equations of motion from actions at a distance, and not, as I now consider to be much easier, from Maxwell's expressions. Later, von Helmholtz[14] in his electromagnetic theory of color dispersion started from the same point of view[15].

Giese[16] has applied to various cases the hypothesis that electricity is connected to ions in metallic conductors as well; but the picture which he gives of the processes in these bodies differs at one point substantially from the idea that we have of the conduction in electrolytes. While the particles of dissolved salt, however often they may be stopped by the water molecules, eventually might travel over large distances, the ions in a copper wire will hardly have such a great mobility. We can however be satisfied with forward and backward motion over molecular distances, if we only assume that one ion often transfers its charge to another, or that two oppositely charged ions, if they meet or after they are "connected" with one another, exchange their charges with each other. In any case, such processes must take place at the boundary of two bodies when a current flows from one to the other. If, for example, $n$ positively charged copper atoms are deposited on a copper plate, and we want all the electricity in the latter also to be bound to ions, then we have to assume that the charges are transferred to $n$ atoms in the plate, or that $\tfrac{1}{2}n$ of the deposited particles exchange their charges with $\tfrac{1}{2}n$ negatively charged copper atoms which were already in the electrode.

Thus, if the adoption of this transition or exchange of the ionic charges (a process which is, of course, still quite obscure) is the essential complement to any theory that requires an entrainment of electricity by ions, then a persistent electric current never consists of a convection alone, at least not when the centers of two touching or interconnected particles are at some distance $l$ from each other. The motion of electricity then happens without convection over a distance of order $l$, and only if this is very small in proportion to the distance over which a convection takes place are we, on the whole, dealing almost exclusively with this latter phenomenon. Giese is of the opinion that in metals a real convection is not in play at all. But since it does not seem possible to incorporate the "jumping" of the charges into the theory, one will excuse that for my part I totally disregard such a process, and that I interpret a current in a metal wire simply as a motion of charged particles. Further research will have to decide whether the results of the theory remain intact under a different view.

§ 3.
The theory of ions was very suitable for my purpose, because it makes it possible to introduce the permeability of the aether into the equations in a rather satisfactory way. These equations naturally fall into two groups. First, we must express how the state of the aether is determined by the charge, position, and motion of the ions; secondly, we must indicate with what forces the aether acts on the charged particles. In my paper already cited[17] I derived the formulas from certain assumptions by means of d'Alembert's principle, thereby choosing a path that bears much resemblance to Maxwell's application of Lagrange's equations. Now, for the sake of brevity, I prefer to introduce the basic equations themselves as hypotheses. The formulas for the aether agree, as regards the space between the ions, with the known equations of Maxwell's theory, and express in general that any change caused in the aether by an ion propagates with the velocity of light. The force exerted by the aether on a charged particle, on the other hand, we regard as a function of the state of that medium at the point where the particle is located.

The adopted fundamental law differs in a major point from the laws introduced by Weber and Clausius. The influence suffered by a particle B due to the vicinity of a second particle A does indeed depend on the motion of the latter, but not on its instantaneous motion; what matters, rather, is the motion of A some time earlier, and the adopted law thus corresponds to the requirement for the theory of electrodynamics that Gauss presented in 1845 in his well-known letter to Weber[18].

In general, the assumptions that I introduce represent in a certain sense a return to the earlier theories of electricity. The core of Maxwell's views is not thereby lost, but it cannot be denied that with the adoption of ions we are not far from the electric particles that were used earlier. In some simple cases this appears particularly clearly. Since we see the essence of an electric charge in an accumulation of positively or negatively charged particles, and since the basic formulas for stationary ions yield Coulomb's law, the whole of electrostatics, for example, can be brought into the earlier form.

1. Lorentz. De l'influence du mouvement de la terre sur les phénomènes lumineux. Arch. néerl., T. 21, p. 103, 1887; Lodge. Aberration problems. London Phil. Trans., Vol. 184 A, p. 727, 1893; Lorentz. De aberratietheorie van Stokes. Zittingsverslagen der Akad. v. Wet. te Amsterdam, 1892-93, p. 97.
2. Michelson. American Journal of Science, 3d Ser., Vol. 22, p. 120; Vol. 34, p. 333, 1887; Phil. Mag., 5th Ser., Vol. 24, p. 449, 1887.
3. Des Coudres. Wied. Ann., Bd. 38, p. 71, 1889.
4. Fizeau. Ann. de chim. et de phys., 3e sér., T. 58, p. 129, 1860; Pogg. Ann., Bd. 114, p. 554, 1861.
5. Fizeau. Ann. de chim. et de phys., 3e sér., T. 57, p. 385, 1859; Pogg. Ann., Erg. 3, p. 457, 1853.
6. Michelson and Morley. American Journal of Science, 3d Ser., Vol. 31, p. 377, 1886.
7. Lorentz. La théorie électromagnétique de Maxwell et son application aux corps mouvants. Leide, E. J. Brill, 1892. (Also published in Arch. néerl., T. 25.)
8. A preliminary report about this was published in Zittingsverslagen der Akad. v. Wet. te Amsterdam, 1892-93, pp. 28 and 149.
9. Giese. Wied. Ann., Bd. 17, p. 538, 1882.
10. Schuster. Proc. Roy. Soc., Vol. 37, p. 317, 1884.
11. Arrhenius. Wied. Ann., Bd. 32, p. 565, 1887; Bd. 33, p. 638, 1888.
12. Elster and Geitel. Wiener Sitz.-Ber., Bd. 97, Abth. 2, p. 1255, 1888.
13. Lorentz. Over het verband tusschen de voortplantingssnelheid van het licht en de dichtheid en samenstelling der middenstoffen. Verhandelingen der Akad. van Wet. te Amsterdam, Deel 18, 1878; Wied. Ann., Bd. 9, p. 641, 1880.
14. v. Helmholtz. Wied. Ann., Bd. 48, p. 389, 1893.
15. Also Koláček (Wied. Ann., Bd. 32, pp. 224 and 429, 1887) attempted to explain (albeit in a different manner) dispersion by electrical vibrations in the molecules. The theory of Goldhammer (Wied. Ann., Bd. 47, p. 93, 1892) should also be mentioned.
16. Giese. Wied. Ann., Bd. 37, p. 576, 1889.
17. Lorentz. La théorie électromagnétique de Maxwell et son application aux corps mouvants.
18. Gauss. Werke, Bd. 5, p. 629.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432830810546875, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/105955/please-check-the-solution-of-my-ode
# Please Check The Solution of my ODE

Hey guys, I just need to know if my solution to this differential equation is right. The question is: Find the general solutions to the following differential equations, and sketch at least 4 solution curves for each. Part b) is $$\frac{dy}{dx} = x + 6y.$$ The answer that I have found is $$y=\sqrt{\frac{x^2}{6}}$$ Please let me know if that's the right answer. Thanks!

## 4 Answers

Your answer seems to be wrong. (As written, $\sqrt{x^2}$ is just $|x|$, so that expression makes little sense as a general solution.) The general solution is $$y = \frac{Ce^{6x}}{36} - \frac{x}{6} - \frac{1}{36}$$ You can verify this: $$\frac{dy}{dx} = \frac{Ce^{6x}}{6} - \frac{1}{6}\cdots\cdots\cdots(A)$$ Now, $$6y = \frac{Ce^{6x}}{6} - x - \frac{1}{6}$$ $$x+6y = \frac{Ce^{6x}}{6} + x - x - \frac{1}{6}$$ $$x+6y = \frac{Ce^{6x}}{6} -\frac{1}{6} \cdots\cdots\cdots (B)$$ Compare (A) and (B). By putting in various values of C, you can plot your graphs.

What method did you use to solve the ODE? One way of solving it (Laplace transform): $$\frac{dy}{dt} = t + 6y.$$ (Just changed the variable from x to t.) $$L\left(\frac{dy}{dt}\right) = L(t) + L(6y)$$ $$sL(y) - y(0) = \frac{1}{s^2} + 6 L(y)$$ $$(s-6)L(y) = y(0) + \frac{1}{s^2}$$ $$L(y) = \frac{y(0) + \frac{1}{s^2}}{s-6}$$ $$y = L^{-1}\left(\frac{y(0) + \frac{1}{s^2}}{s-6}\right)$$ $$y = y(0)e^{6t} + L^{-1}\left(\frac{1}{s^2(s-6)}\right)$$ Then you can use partial fractions to invert the remaining term. -

I used the separable method that was taught in my class. Can you please tell me how to correct it? Thanks, it means a lot – Rohit Pradeep Feb 5 '12 at 12:29

Since this is a homework question, I would highly recommend you post your solution so we can debug it. Giving you the answer will not help you get into the IIT :P – Inquest Feb 5 '12 at 12:32

It's not a homework question; I'm getting practice for my test. The answer I got is y=sqrt(x^2/6); I don't know if that is right – Rohit Pradeep Feb 5 '12 at 12:44

If you have the background for the Laplace transform, I have edited my answer with a solution using that. – Inquest Feb 5 '12 at 12:48

Can you please tell me how to do it with the separable method? I will understand it better – Rohit Pradeep Feb 5 '12 at 12:51

This is a first-order linear differential equation, so the general solution is given by: $$y=\frac{\int u(x) \cdot x \,dx+C}{u(x)} ,\text {where}~~ u(x)=e^{-6\int \,dx}$$ Therefore the solution is: $$y=\frac{\int e^{-6x} \cdot x \,dx+C}{e^{-6x}}$$ The integral in the last equation can be evaluated using integration by parts. -

One of the first methods you learn for solving an equation like this is integrating factors. First let's subtract $6y$ from both sides to get $y'-6y=x$ The proper integrating factor here is $e^{-6x}$. Multiplying both sides of the equation by this gets us $e^{-6x}y'-6e^{-6x}y=xe^{-6x}$ The left side can be rewritten to obtain $(e^{-6x}y)'=xe^{-6x}$ Can you take it from there? -

Can't I do it with the separable method? – Rohit Pradeep Feb 5 '12 at 21:35

@Rohit Not sure how Kannappan's solution works, but the equation as it stands is not separable. I don't see any way to get this equation into the form $\int f(x)dx=\int g(y)dy$ – Mike Feb 6 '12 at 12:30

Here we'll use a substitution to make the equation separable: Set $x+6y =t \implies 1+6\dfrac{\mathrm d y}{\mathrm {dx}}=\dfrac{\mathrm d t}{\mathrm{dx}}$. So, the equation transforms to the following: $$1+6t = \dfrac{\mathrm dt}{\mathrm{dx}} \implies \dfrac{\mathrm dt}{1+6t}=\mathrm{dx}$$ Now, integrate on both sides to see what you get.
If this does not get you to the result, I'll help you a bit more! -

OK, this is what I had with the same method: $-6y\frac{dy}{dx} = x$; integrating both sides I get $-\frac{6y^2}{2}=\frac{x^2}{2}$, which is $-3y^2=\frac{x^2}{2}$. I'm kinda lost after this – Rohit Pradeep Feb 5 '12 at 13:18

@Rohit How did you get $-6y \cdot \dfrac{dy}{dx}=x$? – user21436 Feb 5 '12 at 13:24

I tried to use the separable method, so I separated the y from the x. – Rohit Pradeep Feb 6 '12 at 6:37

@Rohit Add a sketch of your steps to your question if you want to know where you went wrong. But otherwise, isn't my solution clear? – user21436 Feb 6 '12 at 6:40
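To finish the substitution sketch above (a completion of the remaining steps, not part of the original thread; $C$ denotes an arbitrary constant): $$\int\frac{\mathrm dt}{1+6t}=\int\mathrm dx \implies \frac{1}{6}\ln|1+6t| = x + c \implies 1+6t = Ce^{6x}.$$ Substituting back $t = x+6y$ gives $1+6x+36y = Ce^{6x}$, i.e. $$y = \frac{Ce^{6x}}{36} - \frac{x}{6} - \frac{1}{36},$$ in agreement with the general solution given in the first answer.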
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 17, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279247522354126, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/06/14/pushouts-and-pullbacks/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
# The Unapologetic Mathematician

## Pushouts and pullbacks

If we have two sets $X$ and $Y$ we can take their union $X\cup Y$: the set of all elements in at least one of the two. We can also characterize this in categorical terms without reference to elements. One thing we can see right off is that $X$ and $Y$ are both subsets of $X\cup Y$. That is, there are arrows $X\rightarrow X\cup Y$ and $Y\rightarrow X\cup Y$. So maybe the union is the coproduct of the two sets.

Well, it's a good guess, but there's a problem. There may be some elements in both $X$ and $Y$. Let's say we have one, called $p$. I could take functions $f_X:X\rightarrow Z$ and $f_Y:Y\rightarrow Z$ with $f_X(p)\neq f_Y(p)$. But any function from $X\cup Y$ has to send $p$ to only one point, so we can't satisfy the coproduct property with the union. In fact the coproduct is the disjoint union, where the two copies of $p$ have different images. We need some way to equalize the two images. And indeed, we have such a way: the coequalizer (you thought I was going to say "equalizer", right?).

The set of points we have to be careful about is the intersection $X\cap Y$. This is a subset of each of $X$ and $Y$, so we have arrows $X\cap Y\rightarrow X$ and $X\cap Y\rightarrow Y$. Now we can take the disjoint union (coproduct) $X\uplus Y$, which comes with arrows $X\rightarrow X\uplus Y$ and $Y\rightarrow X\uplus Y$. We compose these with the two from before to get a pair of arrows $X\cap Y\rightrightarrows X\uplus Y$. Now we take the coequalizer of those arrows. This takes the disjoint union of $X$ and $Y$ and identifies the two copies of each point that came from the intersection, giving the union.

Now let's back up a bit and draw a diagram as usual. We have objects $A$ and $B$, each with an arrow into it from the object $C$. We want to put in an object to complete the commuting square, so that for any other object $D$ and arrows that complete the square there exists a unique arrow from our object to $D$. The object $A\amalg_CB$ (along with its arrows!) is called the "pushout" of the square; a sketch of the square is reconstructed after this post. We also sometimes label the square "p.o." to remember that it's not just any commuting square, but a pushout square. If you've followed the discussion of products, coproducts, equalizers, and coequalizers, you should be able to write down a category in which this is a universal object.

If whenever we have the left side of this square (the three objects and two arrows) in a category we can find a pushout, then we say the category "has pushouts". The discussion in the case of sets above generalizes, and we see that if a category has coproducts and has coequalizers then it has pushouts. In this case, the pushout is constructed exactly as we did before. However, it's possible to have pushouts on their own.

Pushouts are closely related to coproducts, as you might guess. In fact, notice that the setup here — the three objects and two arrows — is actually the same thing as two objects in the comma category $(C\downarrow\mathcal{C})$. That is, we start with two arrows from $C$. Then the pushout $A\amalg_CB$ also has an arrow from $C$ — both paths around the square are the same — so it's another object in the comma category. The arrows from $A$ and $B$ to $A\amalg_CB$ are compatible with the arrows from $C$, so they're morphisms in the comma category. The upshot is that the pushout of the above diagram is the coproduct in the comma category. On the other hand, what if our category $\mathcal{C}$ has an initial object $I$?
Then every other object $A$ has a unique arrow $I\rightarrow A$, and all morphisms in $\mathcal{C}$ are compatible with these arrows from $I$. That is, $\mathcal{C}$ is isomorphic to the comma category $(I\downarrow\mathcal{C})$. Then if $\mathcal{C}$ has pushouts we know that $(I\downarrow\mathcal{C})$ in particular has coproducts, and so $\mathcal{C}$ does too: coproducts are pushouts over an initial object when one exists.

We've already seen a few more places that pushouts come up. The amalgamated free product of two groups over a third is a pushout in $\mathbf{Grp}$. In particular, this means that free products of groups are coproducts, since they're amalgamated over the trivial group. Also the amalgamated direct sum of modules is a pushout in $R-\mathbf{mod}$.

Here's one that we haven't considered directly: let $R$ be a commutative ring with unit and let $A$ be an algebra over $R$ with unit. Then we have a homomorphism $R\rightarrow A$ of rings sending $r\in R$ to $r\cdot 1\in A$ — the action of $r$ on the unit of $A$. That is, an $R$-algebra is an object in the comma category $(R\downarrow\mathbf{Ring})$. Now if we have two such algebras, show that their tensor product $A_1\otimes_RA_2$ is a coproduct in this comma category, or equivalently a pushout in $\mathbf{Ring}$.

The dual notion to a pushout is a pullback. Here we use the mirror-image diagram, with all the arrows reversed (see the sketch after this post). As for the other dual notions, you should go through the discussion of pushouts and write down the dualized versions explicitly.

As a (somewhat complicated) exercise in pullbacks, note first that a pullback over $C$ is the same as a product in the comma category $(\mathcal{C}\downarrow C)$. Now consider the category $\mathbf{Gpd}$ of groupoids. Verify that if the functors from $A$ and $B$ to $C$ are both faithful, then the arrow from $A\times_CB$ to $C$ is faithful. In particular, if $C$ is a group, then $A$ and $B$ are both (equivalent to) group actions, and their pullback will be another one. Check that if $A$ is equivalent to the action groupoid of $C$ acting on $S$ and $B$ is equivalent to the action groupoid of $C$ acting on $T$ then $A\times_CB$ is equivalent to the action groupoid of $C$ acting on $S\times T$ by the product action. This may be a bit difficult, but just working at it for a while should give some insights into how these things work.

Posted by John Armstrong | Category theory
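The pushout and pullback squares discussed in this post appeared as images in the original; the following is a plain reconstruction of the two standard diagrams (my own rendering, not the original graphics). On the left is the pushout square, with every arrow pointing away from $C$; on the right is the pullback square, with every arrow pointing toward $C$:

$$\begin{array}{ccc} C & \longrightarrow & A \\ \downarrow & & \downarrow \\ B & \longrightarrow & A\amalg_C B \end{array} \qquad\qquad \begin{array}{ccc} A\times_C B & \longrightarrow & A \\ \downarrow & & \downarrow \\ B & \longrightarrow & C \end{array}$$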
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 87, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507638812065125, "perplexity_flag": "head"}
http://wiki.math.toronto.edu/DispersiveWiki/index.php/NLS_wellposedness
# NLS wellposedness

### From DispersiveWiki

In order to establish the well-posedness of the NLS in $H^s$ one needs the non-linearity to have at least $s$ degrees of regularity; in other words, we usually assume $p$ is an odd integer, or $p > [s] + 1$. With this assumption, one has LWP for $s \ge \max(0, s_c)$ CaWe1990; see also Ts1987; for the case $s=1$, see GiVl1979. In the $L^2$-subcritical cases one has GWP for all $s\ge 0$ by $L^2$ conservation; in all other cases one has GWP and scattering for small data in $H^s$, $s \ge s_c$. These results apply in both the focussing and defocussing cases. At the critical exponent one can prove Besov space refinements Pl2000, Pl-p4. This can then be used to obtain self-similar solutions; see CaWe1998, CaWe1998b, RiYou1998, MiaZg-p1, MiaZgZgx-p, MiaZgZgx-p2, Fur2001.

Now suppose we remove the regularity assumption that $p$ is either an odd integer or larger than $[s]+1$. Then some of the above results are still known to hold:

• In the $H^1$-subcritical case one has GWP in $H^1$, assuming the nonlinearity is smooth near the origin Ka1986.
• In $R^6$ one also has Lipschitz well-posedness BuGdTz2003.

In the periodic setting these results are much more difficult to obtain. On the one-dimensional torus T one has LWP for $s > \max(0, s_c)$ if $p > 1$, with the endpoint $s=0$ being attained when $1 \le p \le 4$ Bo1993. In particular one has GWP in $L^2$ when $p < 4$, or when $p=4$ and the data has small norm. For $6 > p \ge 4$ one also has GWP for random data whose Fourier coefficients decay like $1/|k|$ (times a Gaussian random variable) Bo1995c. (For $p=6$ one needs to impose a smallness condition on the $L^2$ norm or assume defocusing; for $p>6$ one needs to assume defocusing.)

• For the defocussing case, one has GWP in the $H^1$-subcritical case if the data is in $H^1$.

Many of the global results for $H^s$ also hold true for $L^{2}(|x|^{2s} dx)$. Heuristically this follows from the pseudo-conformal transformation, although making this rigorous is sometimes difficult. Sample results are in CaWe1992, GiOzVl1994, Ka1995, NkrOz1997, NkrOz-p. See NaOz2002 for further discussion.
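A note for readers landing on this page directly: the page does not restate the equation or the critical exponent, so we record here, as an assumption, the convention standard on this wiki. The equation is the nonlinear Schrödinger equation

$$i u_t + \Delta u = \pm |u|^{p-1} u, \qquad u : \mathbb{R}\times\mathbb{R}^d \to \mathbb{C},$$

and the critical regularity $s_c$ is the exponent left invariant by the scaling $u(t,x) \mapsto \lambda^{2/(p-1)} u(\lambda^2 t, \lambda x)$, namely

$$s_c = \frac{d}{2} - \frac{2}{p-1}.$$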
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169924855232239, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/92445-help-factorising.html
Thread:

1. Help with factorising

I am in need of help please! We have just started doing factorising in math and I am stumped by the following:

21a^2y^3q-14y^4q^3+7y^2q

I need to factorise this. I understand that I have to take out the common factors but don't know where to start! Thanks in advance!

2. $21a^2y^3q-14y^4q^3+7y^2q$ = $7(3a^2y^3q-2y^4q^3+y^2q)$ = $7y^2q(3a^2y-2y^2q^2+1)$

3. Originally Posted by briggott [...]

Listen man, I just wanted to give you a quick breakdown. When factoring a polynomial, all you have to do is figure out what all the terms have in common, and then take out the biggest factor you can. Look at this: $ab^2c^4x^7yz^9+a^2b^3cx^9y^3z+a^2b^{15}c^8x^4y^{100}z$. Obviously all of these terms have the factors $a,b,c,x,y,z$ in common. But which powers do we take out? We take out the GREATEST COMMON FACTOR, meaning that for each variable we factor out the smallest exponent that occurs in every term. Let's start with a. What is the smallest exponent which occurs with a? Well, it's 1: $a^1=a$. So you factor out an $a$ from each term, leaving you with $a(b^2c^4x^7yz^9+ab^3cx^9y^3z+ab^{15}c^8x^4y^{100}z)$. And let's look at $y$. What's the smallest exponent that occurs with $y$? $y^1$! Let's do it; we already have an a out there, now we'll bring a y out too: $ay(b^2c^4x^7z^9+ab^3cx^9y^2z+ab^{15}c^8x^4y^{99}z)$. How about $b$? $b^2$ is the smallest power of b appearing, soooo $ab^2y(c^4x^7z^9+abcx^9y^2z+ab^{13}c^8x^4y^{99}z)$. And you keep going until there is nothing in common among the 3 terms anymore. REMEMBER, that is not to say that there won't be common factors in the first and second term, or the first and third, etc... No, because we aren't concerned with those. We began by factoring a TRINOMIAL, not a BINOMIAL. So, always keep this in mind while factoring. Ask yourself: For what purpose am I trying to factor this expression? Is it to solve for all values of the variable, or to transform the expression so that it mimics the appearance of another expression, etc.? Don't just arbitrarily start factoring, because you may end up doing more work than you had to. Well, so long, and happy math! I'll let you do the rest.
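One hedged way to double-check a factorization like the one above (a tool suggestion from outside the thread, not part of the original discussion) is a quick computer-algebra check, for example with SymPy:

```python
# Sanity-check the factorization of 21a^2y^3q - 14y^4q^3 + 7y^2q.
import sympy as sp

a, y, q = sp.symbols('a y q')
expr = 21*a**2*y**3*q - 14*y**4*q**3 + 7*y**2*q
print(sp.factor(expr))  # -> 7*q*y**2*(3*a**2*y - 2*q**2*y**2 + 1)
```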
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.95702064037323, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/42594-linear-independence-zero-vector.html
# Thread:

1. ## linear independence and the zero vector

How would I prove that for a vector space V, a subset of V that contains the zero vector is linearly dependent?

2. Isn't any vector space containing the zero vector linearly dependent, or am I misreading this? Consider the nonempty set $S = \{\bold{v}_{1}, \bold{v}_{2}, \hdots, \bold{v}_{n}, \bold{0}\}$. From here, we can see that: $0\bold{v}_{1} + 0\bold{v}_{2} + \hdots + 0\bold{v}_{n} + 1(\bold{0}) = \bold{0}$ i.e. $\bold{0}$ is a linear combination of the vectors in S whose coefficients aren't all 0.

3. Thanks, I just noticed that after posting.

4. Originally Posted by o_O Isn't any vector space containing the zero vector linearly dependent, or am I misreading this? [...] I have a problem with that definition of linear dependence, and this is the ideal thread to get some opinions. The definition of linearly dependent that I like to use is that any vector in S can be constructed as a linear combination of the others. Consider the set S = {0, i, j}. Linearly dependent or not? I say not, since you can't construct i as a linear combination of 0 and j ..... (I do understand the argument when the other definition is used, that S is dependent ..... I just don't like it ......)

5. Isn't that the definition of span? Linearly independent means that a_1*v_1+...+a_n*v_n=0, for scalars a_i and vectors v_i in the vector space, forces all the scalars to equal 0. I don't see where the trouble is.

6. Originally Posted by squarerootof2 [...] I miswrote my question. It has since been edited.

7. From my understanding, ANY vector that contains the zero vector cannot be linearly independent (by the earlier argument), and is thus linearly dependent. Isn't that true?

8. Originally Posted by squarerootof2 From my understanding, ANY vector that contains the zero vector cannot be linearly independent (by the earlier argument), and is thus linearly dependent. Isn't that true? ANY subset of vectors that contains the zero vector cannot be linearly independent. That is the correct statement.

9. Why does it have to be a subset that contains the zero vector? I thought it was any set of vectors that contained it that was linearly dependent .. well, until mr. fantastic's example came up with his definition of linear independence.

10. Originally Posted by Plato ANY subset of vectors that contains the zero vector cannot be linearly independent. That is the correct statement. I am familiar with this statement. But if a set S of vectors is linearly dependent then you should be able to express each element of S as a linear combination of the others ..... For S = {0, i, j}, how do you express i (or j) as a linear combination of the others?

11. Originally Posted by mr fantastic But if a set S of vectors is linearly dependent then you should be able to express each element of S as a linear combination of the others ..... Hmmm..... What about $S = \{(1,2),(2,4),(1,1)\}$? This set is clearly linearly dependent. But I can't express each element of S as a linear combination of the others.
For example, (1,1) cannot be expressed as a linear combination of the others, yet S is linearly dependent. Considering your knowledge, perhaps you meant:

Originally Posted by Mr.F meant But if a set S of vectors is linearly dependent then you should be able to express some element of S as a linear combination of the others .....

Technically this statement is wrong too, unless you define it that way. A set S is defined to be linearly dependent if it is not linearly independent. So technically (formally, logically...) you have to negate the definition of linear independence to get the mathematical formulation for linear dependence. And that formulation is the one you don't like....

I think this dilemma of implication and equivalence is pretty common. Generally, if you have a set of vectors S where one is able to express some element of S as a linear combination of the others, then the set is definitely linearly dependent. However, the other way round is not necessarily true. And your question works as a wonderful illustration of this fact.

These are my thoughts, and they could be unconvincing. Perhaps some MHF algebraist can explain it better.

12. Originally Posted by Isomorphism [...] Thanks for this reply. In addition:

Linear Dependence Theorem: The set { $v_1, \, v_2, \, .... v_n$} of non-zero vectors is linearly dependent if and only if some $v_k$, $2 \leq k \leq n$, is a linear combination of the preceding ones.

Theorem: A set of two or more vectors $S =${ $v_1, \, v_2, \, .... v_n$} is linearly dependent if and only if one of the $v_i$ is a linear combination of the other vectors in S.
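A quick numerical check of Isomorphism's example (a hypothetical verification, not from the thread): three vectors in $\mathbb{R}^2$ can span at most a 2-dimensional space, so the matrix with these vectors as rows must have rank less than 3, confirming linear dependence.

```python
# Rank test for S = {(1,2), (2,4), (1,1)}: rank < number of vectors
# means the set is linearly dependent.
import numpy as np

S = np.array([[1, 2],
              [2, 4],
              [1, 1]])
print(np.linalg.matrix_rank(S))  # -> 2, which is < 3
```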
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379324316978455, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81680/recovering-a-matrix-instead-of-a-vector/81689
# Recovering a matrix instead of a vector

It is known that given corrupt measurements $y = Af+e$ one can recover an input vector $f \in \textbf{R}^n$ exactly by solving a convex optimization problem. What if $f$ is instead a square matrix? Can we recover a matrix from corrupt measurements instead of just a vector?

-

Isn't a matrix just a special case of a vector? That is, can't you "flatten" an $n\times n$ matrix into a vector in $\mathbb{R}^{n^2}$ and then solve the problem by the usual means? – Noah Stein Nov 23 2011 at 3:00

I'm not sure I agree with the assertion in the first sentence. It seems to depend strongly on the nature of the corruption, and on the precision of the measurements. – S. Carnahan♦ Nov 23 2011 at 5:31

1 Answer

First, I emphasize S. Carnahan's comment: exact recovery from noisy measurements is not that simple. "Exact recovery" usually means "recovery of the exact support of $f$". Moreover, sparsity assumptions on $f$ and special assumptions on $A$ and on the size of $e$ are needed.

To address your question: this again depends on a lot of things. Of course you can view this as $n$ multiple instances of the original problem and basically use the previous theory. Other structural assumptions lead to other results (e.g. having a "joint sparsity pattern in the columns of $f$"). Buzzwords here are "joint sparsity" or "multiple measurement vectors".

-
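A minimal sketch of Noah Stein's flattening observation (the sizes and names below are illustrative assumptions, and no recovery step is shown): measurements of a matrix $F$ can be rewritten as measurements of the vector $\mathrm{vec}(F) \in \mathbb{R}^{n^2}$, after which any vector-recovery method applies unchanged.

```python
# Rewrite matrix measurements y = A vec(F) + e in the standard
# vector form y = A f + e by flattening F.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 30                          # illustrative sizes
F = rng.standard_normal((n, n))       # unknown square matrix
A = rng.standard_normal((m, n * n))   # measurement matrix
e = 0.01 * rng.standard_normal(m)     # corruption/noise

f = F.reshape(-1)                     # f = vec(F) in R^(n^2)
y = A @ f + e                         # same linear model as the vector case
```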
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185693860054016, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=1374456
## Very simple QFT questions

I have some extremely basic questions in QFT. First, P&S discuss causality in QFT in the first chapter of the book and, after showing that $<0| \phi(x)\phi(y)|0>$ does not vanish for spacelike intervals, they say "to really discuss causality, however, we should ask not whether particles can propagate over spacelike intervals but whether a measurement performed at one point....." What do they mean by this? I have my own interpretation, but it's nontrivial and it may be wrong (they mention it so casually that it seems obvious to them). Of course, classically, it would make no sense to say that a particle could propagate over spacelike intervals. In the quantum (and relativistic) world, the only way for me to make sense of what they are saying is that measuring the position of a particle precisely involves energies that necessarily lead to the creation of more particles than the ones already present in the system. Therefore, the very idea of checking whether a single particle that was at a point "x" at t=0 is now located at a point "y" at a time t is impossible in QFT.

Second, how is a measurement actually defined in QFT? In NRQM, it's pretty clear. For a given hermitian operator, one finds its eigenvalues and eigenstates, and so on. How is this defined in QFT? For example, consider the field $\phi(x)$ itself. Now, I always thought that this is not in itself an observable, so it does not make sense to talk about *measuring* phi. And yet, P&S talk about measurements of phi in the first chapter. Is that an abuse of language?

One problem I find with QFT books is that there is no effort devoted to making the connection with NRQM. This is strange; it's the equivalent of teaching GR and never talking about how one may recover Newton's gravitation. One should be able to treat a simple system (let's say the infinite square well!) and show in what way one may recover the NRQM result, within some limit! Does anyone know of a book that makes that type of connection?

Another question is about the classical limit of QED. People mention that a coherent state of photons corresponds to the classical limit of classical E&M. But how does that work, exactly? I know that one can then replace the creation and annihilation photon operators by their expectation values, but the field obtained is still imaginary, so it's not the classical field. There has to be much more work (with many subtleties involved, no doubt) before connecting with the classical treatment of E&M.

Quote by nrqed One problem I find with QFT books is that there is no effort devoted to making the connection with NRQM. [...] Does anyone know of a book that makes that type of connection?

Excellent point! I discuss this problem in a somewhat different context in Sec. VIII of http://arxiv.org/abs/quant-ph/0609163 Sec. IX is also relevant for that issue.
I also propose a solution of this problem in http://arxiv.org/abs/quant-ph/0406173

I started studying QFT from P&S too, and got stuck on nearly the same things. It's starting to look like there aren't very good answers to these questions, although I'm still hopefully waiting for replies to this thread. Instead of answering, I'll throw in another question about the basics of QFT (or about relativistic theory in general). Peskin & Schroeder explain in the beginning of their book that the propagator $$K(x,y,T)=\int\frac{d^3p}{(2\pi)^3}e^{-i(E_{\boldsymbol{p}}T-\boldsymbol{p}\cdot(\boldsymbol{x}-\boldsymbol{y}))}$$ cannot be used, because it violates causality. Is that really so? If I assume a wavefunction (like in nonrelativistic theory) $$\phi(t,x)$$ to have a time evolution defined by $$\phi(t+T,x)=\int d^3y\; K(x,y,T)\phi(t,y)$$ a brief calculation then shows that the wavefunction satisfies the KG equation, which is Lorentz invariant. Doesn't this mean that this propagator gives Lorentz invariant time evolution? And how could Lorentz invariant time evolution violate causality? Besides, the integral in the propagator does not converge; it merely behaves as a distribution when used correctly. Hence it doesn't look very smart to simply estimate whether it is zero or nonzero outside the lightcone. My point is that not only is the explanation of how causality is preserved (via the measurement interpretation) confusing, but so is the explanation of why this other propagator would instead violate causality.

Quote by jostpuur I started studying QFT from P&S too, and got stuck on nearly the same things. [...] Doesn't this mean that this propagator gives Lorentz invariant time evolution? And how could Lorentz invariant time evolution violate causality? [...]

Lorentz invariance does not distinguish the past from the future; for instance, in Maxwell theory you have to select the retarded propagator. The problem with locality for scalar fields stems from insisting that your solution contains positive energies only (the propagator is purely imaginary outside the lightcone). Allowing for negative energies can restore locality (but one believes such theories to be unstable once interactions are turned on).

I just happened to hit the url http://www.physics.ucsb.edu/~mark/qft.html on these forums a couple of minutes ago, started reading it, and noticed that Srednicki explains the necessity of the commutator vanishing outside the lightcone quite differently. I haven't understood it myself yet, but it certainly looks worth checking out. On page 46 of the pdf.

Quote by nrqed I have some extremely basic questions in QFT.
First, P&S discuss causality in QFT in the first chapter of the book and, after showing that $<0| \phi(x)\phi(y)|0>$ does not vanish for spacelike intervals, they say "to really discuss causality, however, we should ask not whether particles can propagate over spacelike intervals but whether a measurement performed at one point....." It's basic yes, but apparently no so simple.... Different QFT books treat this subject quite differently (Zee, Feynman, P&S) and with different results. A thorough treatment should handle this entirely analytically instead of using approximations as done in most textbook. checked with numerical simulations. I did extensive numerical simulations of Klein Gordon propagation (in many different spacial dimensions) and one never sees any propagation outside the light cone. Also analytically one doesn't see anything outside the light cone. The concise mathematical expression for the Green's function in 3+1 dimensions, for forward propagation is: $$\Theta(t) \left(\ \frac{1}{2\pi}\delta(s^2)\ + \frac{m}{4\pi s} \Theta(s^2)\ \mbox{\huge J}_1(ms)\ \right), \qquad \mbox{with:}\ \ \ s^2=t^2-x^2$$ Where Theta is the Heaviside step function and J1 is the Bessel J function of the first order. The Theta at the left selects the forward propagating half while the other cuts off any propagation outside the light cone. Analytically, the Bessel function goes imaginary outside the lightcone and this is generally what becomes the part "outside the light cone" in the form of the Bessel I and Bessel K functions which become $$\mbox{\huge I}_1(mx)\ \rightarrow\ \frac{1}{\sqrt{2\pi mx}}\ \ e^{mx}, \ \ \ \ \mbox{\huge K}_1(mx)\ \rightarrow\ \sqrt{\frac{\pi}{2 mx}}\ \ e^{-mx}$$ for large x, typically K1 becomes exp(-mx) which is then usually given as the part outside the light cone. However, the concise derivation of the Green's function does produce the Heaviside step function which eliminates the propagation outside the lightcone. (Note that in the limit of m=0 the propagation outside the lightcone would become infinite!) One can find many variations of the analytical expression of the Klein Gordon propagator given above, sometimes with a negative sign for the Dirac delta function, which is wrong since this part becomes the photon propagator in the limit case where where m goes to zero, and should be positive. Sometimes one sees a different normalization factor. Also the Bessel function changes from text to text. Feynman, in 1948, with paper and pencil as the only tools to calculate (!) plus mathematical table books came to: $$-\frac{1}{4\pi}\delta(s^2)\ + \frac{m}{8\pi s} \mbox{\huge H}_1^{(2)}(ms)\ \right), \qquad \mbox{with:}\ \ \mbox{\huge H}_1^{(2)}(ms)= \mbox{\huge J}_1(ms)-i\mbox{\huge Y}_1(ms)$$ There's the sign, a factor 1/2 and the Hankel function obtained from the tables which is the Bessel equivalent of exp(ix)=cos(x)+isin(x). This is where the "propagation outside the lightcone" started historically: http://chip-architect.com/physics/KG...or_Feynman.jpg Another very popular (modern) textbook (Zee) handles it in I.3 formula (23). This is a also a hand waving approximation leading to the exp(-mr) light cone leaking. P&S then use a particular argument with particles and anti particles which would cancel out each others propagation outside the lightcone. to restore causality. (In chapter 2.4) This after they get the exp(-mr) term from a similar approximation. 
The simplest way in which you can convince yourself that there is no propagation outside the lightcone is by expanding like this:

$$\frac{1}{p^2-m^2}\ =\ \frac{1}{p^2}+\frac{m^2}{p^4}+\frac{m^4}{p^6}+\frac{m^6}{p^8}+.....$$

The Fourier transform of this series leads to a series representing the Bessel J function. The first term is the massless propagator, which is strictly on the light cone only; its Fourier transform is the Dirac delta function in the space-time version of the propagator. The second term represents a massless propagator acting on a massless propagator: the propagation on the light cone becomes a source itself, which is again propagated on the lightcone, and so on. Thus: none of the terms in the series has any propagation outside the light cone, and neither does the sum of the geometric series, the Klein Gordon propagator.

Regards, Hans.

PS: related stuff: http://functions.wolfram.com/PDF/BesselJ.pdf (also has the KG propagator) http://en.wikipedia.org/wiki/Bessel_function http://www.chip-architect.com/physic..._radiation.pdf With the latter paper and the series expansion you can derive the KG propagator in any dimension.

Quote by jostpuur I just happened to hit the url http://www.physics.ucsb.edu/~mark/qft.html on these forums a couple of minutes ago [...] Srednicki explains the necessity of the commutator vanishing outside the lightcone quite differently. [...] On page 46 of the pdf.

This argument is equivalent to demanding that spacelike separated measurements do not influence each other *statistically* (on the level of single events, this is not true - see the EPR paradox). Of course, it is perfectly legitimate to object that statistical assertions for one instant of time seem to contradict the very definition of statistics: quantum physicists interpret this again in terms of unrealized potentialities. In other words, a phantom world which we will never observe.

Quote by nrqed Second, how is a measurement actually defined in QFT? In NRQM, it's pretty clear. For a given hermitian operator, one finds its eigenvalues and eigenstates, and so on. How is this defined in QFT? [...] Is that an abuse of language?

Looking at this part of the question, I was very confused by Peskin and Schroeder too. I found a brief discussion in Bjorken and Drell volume 2, section 12.3, entitled "Measurability of the Field and Microscopic Causality".
The last paragraph states "In order to associate any physical content with this mathematical result, we must assume that it makes sense to attach physical meaning to the measurement of a field strength at a point, a concept already criticized in earlier paragraphs". So I think we are in good company when we are confused! (B and D then quote a paper by Bohr and Rosenfeld which I don't have access to, but it sounds as if it might be quite useful - Phys Rev 78 p794 (1950).)

IMO, it is ok to use a UV cutoff (which renders the field operator at a point well defined). At extremely high energies, we need new physics anyway.

I was confused about the same thing after reading Peskin and Schroeder. I found a good explanation (at least I found it helpful) at the bottom of page 198 of Weinberg's "The Quantum Theory of Fields, Volume 1". Although you need to follow the arguments Weinberg has been making in the first four chapters, he basically says it is best to think of the causality condition as something which is needed for the S-matrix to be Lorentz invariant, rather than thinking of it in terms of measurements of field values at different points.
Therefore, the very idea of checking whether a single particle that was at a point "x" at t=0 is now located at a point "y" at a time t is impossible in QFT. This statement "localize one particle -> create more particles" is often repeated in QFT textbooks. However, I don't find it very convincing. It is true that by an accurate determination of position we increase the uncertainty of the momentum and energy of the particle. However, this does not mean that we increase the uncertainty of the *number of particles*. A relativistic (Newton-Wigner) position operator can be defined in QFT, and this operator commutes with the particle number operator. So, one can have a well-defined position of a single particle. Then what about superluminal propagation? It seems to be well-established that a wave function of a localized particle spreads out faster than the speed of light. And the usual wisdom says that this contradicts causality, because from the point of view of a moving observer the events of particle creation and absorption would change their time order. G. C. Hegerfeldt, "Instantaneous spreading and Einstein causality in quantum theory", Ann. Phys. (Leipzig), 7, (1998) 716. Below I will try to explain that the conclusion about violation of causality could be premature. Instantaneous spreading and causality do not necessarily contradict each other. There are two key points in my analysis. First is that "instantaneous spreading" refers to the wavefunction, and wavefunctions must be interpreted probabilistically. The second point is that particle localization is relative. If observer at rest sees the particle as localized, then the moving observer sees that particle's wave function is spread out over entire space. F. Strocchi, "Relativistic quantum mechanics and field theory", http://www.arxiv.org/hep-th/0401143 First take the point of view of an observer at rest O. This observer prepares particle localized at point A at t=0. At time instant t>0 he sees that the wave function has spread out superluminally. This means that the probability of finding the particle at a point B, whose distance from A is greater than ct, is non-zero. So far there is no contradiction. Now take the point of view of observer O', which moves with a high speed relative to O. As I said above, at time t=0 (by his own clock) observer O would see particle's wave function as spread out in space. There will be a maximum at point A. But there will be also a non-zero probability of finding the particle at point B. At later times the wave function will spread out even more. However, it is important that observer O' cannot definitely say that he sees the particle as propagating from B to A. He is seeing some diffuse probability distributions at all times, from which it is impossible to say exactly what is the speed and direction of particle's propagation. So, the situation in quantum mechanics is quite different from the classical mechanics, where propagation direction and speed always have a well-defined meaning, and causality is not compatible with superluminal propagation. Blog Entries: 19 Recognitions: Science Advisor Quote by meopemuk A relativistic (Newton-Wigner) position operator can be defined in QFT, and this operator commutes with the particle number operator. So, one can have a well-defined position of a single particle. The problem with this position operator is that it is not really relativistic covariant, but requires a preferred Lorentz frame. 
Quote by Demystifier The problem with this position operator is that it is not really relativistically covariant, but requires a preferred Lorentz frame.

Demystifier: I often hear this opinion about the Newton-Wigner position operator. However, I don't know where it comes from. Could you please explain? It seems to me that Newton and Wigner introduced this operator as an explicitly relativistic object: T. D. Newton and E. P. Wigner, "Localized states for elementary systems", Rev. Mod. Phys., 21, (1949) 400.

Quote by meopemuk It seems to be well-established that the wave function of a localized particle spreads out faster than the speed of light.

No such superluminal propagation of the wave function shows up, either with a concise analytical treatment giving exact solutions in configuration space, or with extensive numerical simulations. See my old post above. The simplest way to convince oneself may be this series development of the Klein Gordon propagator:

$$\frac{1}{p^2-m^2}\ =\ \frac{1}{p^2}+\frac{m^2}{p^4}+\frac{m^4}{p^6}+\frac{m^6}{p^8}+.....$$

which becomes the following operator in configuration space:

$$\Box^{-1}\ \ -\ \ m^2\Box^{-2}\ \ +\ \ m^4\Box^{-3}\ \ -\ \ m^6\Box^{-4}\ \ +\ \ ....$$

where $\Box^{-1}$ is the inverse d'Alembertian, which spreads the wave function out on the lightcone as if it were a massless field. The second term then retransmits it, opposing the original effect, again purely on the light cone. The third term is the second retransmission, et cetera, ad infinitum. All propagators in this series are on the lightcone. The wave function does spread within the light cone because of the retransmission, but it never spreads outside the light cone with superluminal speed.

Regards, Hans.

Quote by Hans de Vries No such superluminal propagation of the wave function shows up, either with a concise analytical treatment giving exact solutions in configuration space, or with extensive numerical simulations. [...] The wave function does spread within the light cone because of the retransmission, but it never spreads outside the light cone with superluminal speed.

Your conclusion directly contradicts the results of Hegerfeldt and many other authors. Did you try to figure out what is wrong with their arguments?
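As a quick sanity check of the geometric-series expansion quoted in this thread (a hypothetical verification, not part of the original discussion), the momentum-space identity can be expanded symbolically:

```python
# Verify 1/(p^2 - m^2) = 1/p^2 + m^2/p^4 + m^4/p^6 + ... as a series in m.
import sympy as sp

p, m = sp.symbols('p m', positive=True)
print(sp.series(1/(p**2 - m**2), m, 0, 8))
# -> p**(-2) + m**2/p**4 + m**4/p**6 + m**6/p**8 + O(m**8)
```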
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327271580696106, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/4202/how-does-interpolation-really-work/4204
# How does Interpolation really work?

I'm looking for some explanation or advice, not help in solving something. Recently I finished my program and my supervisor said "Ok, now it's time for your first paper: write a scientific text about how your program works". If I understand him correctly it means I have to describe the algorithm I used instead of writing in a manner like "for this purpose I use the built-in `Interpolation` function, and for this purpose I use `NDSolve`" etc.

I know how `FindRoot` and `NDSolve` work because there is an explanation in the doc pages about the methods they use, but I did not find detailed information about `Interpolation`. The only thing I know is that it fits the data with polynomial curves. So my question is: What exactly does the `Interpolation` function do? How does it work? How does it determine (partial) derivatives? And why do 3D data points have to lie on a rectangular grid to interpolate a surface? If there is some literature I may read and then reference, that would be great too.

- My paper is about a numerical implementation of some theoretical smooth model which deals with differential equations on a given surface. So the way the surface is given is not highly important and polynomials (built-in interpolation) work fine. I don't think I have to re-invent the wheel and create my own version of the Interpolation function, but I think I have to know how it works exactly. – darksowa Apr 13 '12 at 4:49
- I just don't know: if "fitting polynomial curves between successive data points" is a specific well-known method, like Runge-Kutta for NDSolve or Newton for FindRoot, then my question is useless. But I guess (I don't know) there are many of them, so I want to know which one is implemented in Mathematica. – darksowa Apr 13 '12 at 4:53
- @lcanix, Interpolation uses several different methods. You could try different methods and see how they affect your solution. If no effect is seen, then the method may not be that important. If you do see an influence, it might be helpful to add a simple example to your text to show how the Interpolation algorithm behaves. – ruebenko Apr 13 '12 at 6:05
- Were you able to find enough research material for your paper? Not to detract from Mathematica in any way, but since your goal is to understand splines in general, start off with beta splines because they are so common and you can see them applied in a variety of contexts. For example, they are popular in animation programs for drawing character paths. If playing with parameters in Mathematica is not inspiring, look at the animation program Maya on the Autodesk website and look up path generation using splines. Hope that helps, Iceberg – user994 Apr 16 '12 at 6:36

# Interpolation function methods

`Interpolation` supports two methods:

• Hermite interpolation (default, or `Method->"Hermite"`)
• B-spline interpolation (`Method->"Spline"`)

## Hermite method

I really can't find any good reference to the Hermite method within Mathematica's documentation. Instead, I recommend you take a look at the Wikipedia article on Hermite interpolation. The benefits of Hermite interpolation are:

1. You can compute it locally at the time of evaluation. No global system solving is required, so construction time is shorter and the resulting `InterpolatingFunction` is smaller.
2. Multi-level derivatives can be specified at each point.

One problem is that the resulting function is not continuously differentiable ($C^1$ or higher), even if `InterpolationOrder->2` or higher is used.
See the following example. (The illustrating plot from the original answer is not reproduced here.)

## Spline method

To be specific, we are using B-spline interpolation with a certain knot configuration, depending on the distribution of sample points. I could not find a good web source describing the method (the Wikipedia article is not great), although you can find a step-by-step description of the 1D case within Mathematica's documentation (`BSplineCurve` documentation, Applications -> Interpolation section). The multi-dimensional case is simply the tensor-product version.

The benefits:

1. `InterpolationOrder->d` always guarantees a smooth function of class $C^{d-1}$.
2. Evaluation/derivative computation is very fast.
3. You can take the `BSplineFunction` out of the resulting `InterpolatingFunction` (it's the 4th part), which is compatible with `BSplineCurve` and `BSplineSurface` for fast rendering.

The problems (of the current implementation in V8):

1. It is machine precision only, although it is not hard to implement it manually for arbitrary precision using `BSplineBasis`.
2. It does not support derivative specification.
3. Initially it solves a global linear system and stores the result, so the resulting function is much larger than with the Hermite method (this is not an implementation problem).

# Other functions

Some plot functions such as `ListPlot3D` have their own methods. Sometimes they call the B-spline method, sometimes they use a method based on distance fields (for unorganized points), etc. But probably this is not useful here, since those are only supported as a visual measure.

- I understand cubic spline interpolation has a few different varieties depending on what the derivative does at the end points. I thought InterpolationOrder->3 used cubic spline interpolation. Is cubic spline interpolation a specific case of a B-spline? – Ted Ersek Apr 13 '12 at 21:57
- 1 Yes, so-called cubic spline interpolation is a special case of B-spline interpolation. Now the problem is that the current Mathematica implementation uses something called a "clamped" knot configuration, whereas cubic spline interpolation uses an "unclamped" or "natural" configuration. The difference lies in how the end points (derivatives) are treated. This link has a brief explanation. – Yu-Sung Chang Apr 14 '12 at 1:27
- @Yu-Sung What is the relation between `InterpolationOrder->n` (`Method->"Spline"`) and the commonly used term "n-point spline interpolation"? – Alexey Popkov Oct 5 '12 at 1:43
- @Yu-Sung Chang: Do we know that `Interpolation` uses only those two methods? All that `ref/Interpolation` says is, "`Interpolation` supports a `Method` option. Possible settings include '`Spline`' for spline interpolation and '`Hermite`' for Hermite interpolation." [emph added] – murray Apr 21 at 15:06
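[Editorial illustration, not from the original answer: the local-Hermite versus global-B-spline distinction described above can be reproduced with scipy's analogues. These functions stand in for Mathematica's `Method->"Hermite"` and `Method->"Spline"`; they are not the same implementation.]

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline, make_interp_spline

x = np.linspace(0.0, 2.0 * np.pi, 8)
y = np.sin(x)

# Local Hermite interpolation: derivative values are prescribed at each node,
# and each piece depends only on neighbouring data (no global solve).
hermite = CubicHermiteSpline(x, y, np.cos(x))

# Global B-spline interpolation: a linear system is solved once up front,
# and the result is C^{k-1} smooth (here C^2 for k=3).
bspline = make_interp_spline(x, y, k=3)

xs = np.linspace(0.0, 2.0 * np.pi, 500)
print(np.max(np.abs(hermite(xs) - np.sin(xs))))
print(np.max(np.abs(bspline(xs) - np.sin(xs))))
```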
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224181175231934, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/55029/ramified-extension-in-number-theory
# Ramified extension in number theory

Assume that $K$ is a complete field under a discrete valuation with Dedekind ring $A$ and maximal ideal $\mathfrak p$, and that $A/\mathfrak p$ is perfect. Let $E$ be a finite extension of $K$, let $e$ be a positive integer not divisible by the characteristic of $A/\mathfrak p$, let $\pi_0$ be a prime element in $\mathfrak p$, and let $\beta$ be an element of $E$ such that $|\beta|^e=|\pi_0|$. Then there exists an element $\pi$ of order one in $\mathfrak p$ such that one of the roots of the equation $X^e-\pi=0$ is contained in $K(\beta)$.

I don't see that if $E/K$ is a finite extension then $E/K$ is a totally ramified extension, as the proof claims.

- 3 -1: You did not give enough context (even in the linked-to page). What fields are you talking about? Discretely valued fields? Local fields? – Pete L. Clark Aug 1 '11 at 22:46
- 1 Judging by the typesetting this is from somewhere in Ch. 2 of Lang's book; but you should include more context. – Dylan Moreland Aug 2 '11 at 5:06

## 2 Answers

In general we have $ef=n$, where $f$ is the residue class field extension degree, $e$ is the ramification index, and $n$ is the degree. Here the hypothesis $|\beta|^e=|\pi_0|$ does not suggest (nor imply) that $E/K$ is totally ramified, nor that the subextension $K(\beta)/K$ is. If you add the hypothesis that $E/K$ is totally ramified, then the residue class field extension degree $f$ is $1$, and $K(\beta)/K$ is totally ramified, so $\beta^e=\eta\cdot \pi_0$ with $\eta$ a unit, at first in the integers of the extension... but, since the residue field extension is trivial, we can "correct" $\eta$ by units in the ground field to get $\pi$ in the ground field so that $X^e-\pi=0$ behaves as you want.

-

I checked: this is a Lemma on p. 53 of Lang's Algebraic Number Theory. The proof begins: "We can write $\beta^e = \pi_0 u$ with $u$ a unit in $B$ [the integral closure of $A$ in $E$]. Since the extension is totally ramified..."

Edit: What I wrote in a previous version was wrong; I hadn't read carefully enough. I am currently of the opinion that "$E/K$ is totally ramified" is simply missing as a hypothesis of this Lemma. My evidence is:

1. It is assumed at the beginning of the proof!
2. The Lemma is used (only) to prove Proposition 12, which has the hypothesis that $E/K$ is totally and tamely ramified.

- Okay. I wasn't sure what extension. Well, I'm still learning some basic stuff in algebraic number theory. – Jaska Aug 2 '11 at 13:52
- @paul: As you can now see, you were right all along. (I actually tried to delete my answer, but since it was accepted, I couldn't!) – Pete L. Clark Aug 2 '11 at 13:56
- Oh, sorry. You can delete your answer now if you want. – Jaska Aug 2 '11 at 14:04
- 1 @Pete... ah, ok! I put back my "answer"... I remember being surprised the first time I read Lang that there were no "exercises", but I eventually realized that getting such details straight was the meta-exercise. :) – paul garrett Aug 2 '11 at 14:16
- @paul: Hence the old "vengeance." In the 2nd edition of Lang's Algebra, he put in a section on homological algebra due to pressure, even though he did not think it was worthwhile. The only exercise in the section was "Go to the library, check out a book on Homological Algebra, and prove all the theorems without looking at the proofs in the book." At the time, the only book on homological algebra was Cartan-Eilenberg; rumor has it that either in the next edition of the latter, or in a loose-leaf errata (cont) – Arturo Magidin Aug 2 '11 at 18:25
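[Editorial addition, not part of the thread: to make the unit-correction step in the first answer concrete, here is a worked sketch, assuming additionally that $e$ is not divisible by the residue characteristic, which is the tame case needed for Proposition 12. Suppose $E/K$ is totally ramified and $\beta^e = u\,\pi_0$ with $u$ a unit. Note $u = \beta^e/\pi_0 \in K(\beta)$, and $K(\beta)/K$ is also totally ramified, so the residue field of $K(\beta)$ equals $A/\mathfrak p$; hence $u = u_0(1+x)$ for some unit $u_0 \in A$ and some $x$ in the maximal ideal of the valuation ring of $K(\beta)$. The polynomial $f(T) = T^e - (1+x)$ satisfies $f(1) = -x \equiv 0$ while $f'(1) = e$ is a unit, so Hensel's lemma in the complete valuation ring of $K(\beta)$ yields an $e$-th root $(1+x)^{1/e} \in K(\beta)$. Setting $\pi := u_0\pi_0$, an element of order one in $\mathfrak p$, we get $\bigl(\beta/(1+x)^{1/e}\bigr)^e = \pi$, so $X^e - \pi = 0$ has a root in $K(\beta)$, as the Lemma asserts.]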
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9638510942459106, "perplexity_flag": "head"}
http://mathoverflow.net/questions/48469?sort=oldest
## Sequential sampling of Gaussian and von Mises-Fisher random variables

I don't find any article discussing this problem, so I dare to ask it.

Suppose we are dealing with a data point $x_0 \in \mathbb{R}$ and a function $f:\mathbb{R} \to \mathbb{R}$. Say we repeatedly apply $f$ to the initial data to generate a sequence $(x_0, x_1, \cdots)$ where $x_1 = f(x_0), x_2 = f(x_1), x_3 = f(x_2), \cdots$. It is guaranteed that this iteration converges to a point $x'$, but I'm only interested in computing the sequence up to $x_n$.

Now I want to see what happens if the sequence is perturbed. Let's define the perturbed sequence $(\tilde{x}_0, \tilde{x}_1, \cdots)$ as follows.

• $\tilde{x}_0 = x_0$
• For $\tilde{x}_i$ with $i > 0$, first compute $x_i = f(\tilde{x}_{i-1})$ and pick a random point from a Gaussian distribution $N(x_i, \sigma)$ with some fixed $\sigma$. This random point is picked independently of $(\tilde{x}_0, \cdots,\tilde{x}_{i-2})$ and depends only on the previous point $\tilde{x}_{i-1}$.

I want to generalize this experiment so that the data $x$ live in a higher dimension or even on a hypersphere, or maybe use distributions other than Gaussian. But first of all, I need to know the distribution of $\tilde{x}_n$. I'm guessing the perturbed value $\tilde{x}_n$ will have some distribution (probably Gaussian) centered at $x_n$ with a variance that is a function of $n$ and $\sigma$, but it's somewhat hard to find such a distribution. If it's impossible to compute the distribution, I'd like to show that the distribution of $\tilde{x}_n$ is sharply concentrated around $x_n$. Does my guess seem reasonable? Then how can I solve this? Any suggestion?

Updated later. Previously, I phrased my question in general terminology since I worried that my question would be too focused and narrow. Now I'll state my original question.

I'm working on tweaking power iteration to find a dominant eigenvector $v_1$ of a given symmetric and semi-definite matrix $A\in\mathbb{R}^{d\times d}$. Power iteration initially picks a random vector $q_0$ and computes $q_1 = \frac{Aq_0}{||Aq_0||}$, $q_2 = \frac{Aq_1}{||Aq_1||}$, ..., $q_n = \frac{Aq_{n-1}}{||Aq_{n-1}||}$. We can regard it as $n$ rounds of 'normalized matrix-vector multiplication.' The normalization guarantees that each $q_i$ is a unit vector. As long as the initial vector $q_0$ is not perpendicular to $v_1$, $q_n$ converges to $v_1$.

In my setting, the computation of $\frac{Aq_{i-1}}{||Aq_{i-1}||}$ is perturbed; it returns a perturbed unit vector $\tilde{q}_i$ which is modeled by a von Mises-Fisher (vMF) distribution over a hypersphere in $\mathbb{R}^d$. For a mean vector $\mu$ and a concentration parameter $\kappa$, the pdf of the vMF distribution on the $d$-dimensional hypersphere is $f_d(x; \mu, \kappa) = C_d(\kappa) \exp(\kappa\mu^T x)$ with some normalization factor $C_d(\kappa)$. The greater the value of $\kappa$, the higher the concentration of the distribution around the mean direction $\mu$. (I assume that $\kappa$ is fixed for the entire run.) It means each round returns $\tilde{q}_i$, a unit vector randomly picked around the original vector $q_i = \frac{A\tilde{q}_{i-1}}{||A\tilde{q}_{i-1}||}$. I assume that the perturbed power iteration also converges to the dominant eigenvector, but under a restricted condition.
But finding out the asymptotic behavior of $\tilde{q}_n$ is somewhat tricky, since in each round a vMF random variable undergoes normalized matrix-vector multiplication and maps to some distribution other than a vMF distribution. My questions are:

1. Does $\tilde{q}_n$ still converge to the dominant eigenvector?
2. Is there any reference for finding the distribution of $\tilde{q}_n$? Or can I show it is sharply concentrated around the dominant eigenvector?
3. Is there any useful distribution on a hypersphere other than vMF that will help in analyzing the distribution of $\tilde{q}_n$?

- So to rephrase in more probabilistic notation, let $N_1, N_2, \dots$ be iid $N(0, \sigma^2)$, set $X_0 = x_0$, and $X_{i+1} = f(X_i) + N_i$. You want to know the asymptotic distribution of $X_n$. Do you know anything about $f$ besides that the iterates $f^i(x_0)$ converge? Does $f^i(x)$ converge for other values of $x$? Is $f$ a contraction? – Nate Eldredge Dec 6 2010 at 19:43
- I like the updated part of this question +1. I recommend deleting the first part (however, don't do this if you have some specific reason not to). If you don't mind me asking, in what application does this problem occur? Also, what justifies your use of the von Mises-Fisher distribution here? – Robby McKilliam Dec 6 2010 at 21:17
- To Robby McKilliam: 1) It's one way to guarantee privacy while analyzing high-dimensional data. As you can see from the wikipedia article en.wikipedia.org/wiki/Differential_privacy, adding Laplace noise is a standard technique. It is well known that Gaussian noise is also applicable. 2) Unlike other applications, power iteration must work on unit vectors, and vMF seems a simple and reasonable noise model. I considered Gaussian perturbation of each $q_i$ followed by mapping it onto the hypersphere, but this distribution is too cumbersome to analyze. – Federico Magallanez Dec 6 2010 at 22:09

## 1 Answer

Edit: The following statement is an "answer" to the first part of the question and should be ignored, since the "real meat" is in the addendum :-)

This is a rather general question; let me point out two directions in which one could proceed:

1. What you describe is an example of a discrete approximation to a stochastic differential equation via an Euler scheme; for more details have a look at this page on the wiki Azimuth: stochastic differential equation. It is possible to define stochastic differential equations in higher dimensions, of course, and on smooth real manifolds, for example. The discrete Euler scheme for an equation with additive white noise adds a Gaussian random variable on every timestep, but it is possible to define equations with different noise/random processes; see for example stochastic integral on wikipedia.

2. You could stick to discrete-time equations and processes; in this case a good buzzword to look for is "random iterative models", see for example the book of the same name by Marie Duflo (review in ZMATH here). I'm sure there are a lot more buzzwords connected to your question :-)

About calculating the distribution: there are examples in continuous and in discrete time where it is possible to calculate the distribution exactly, or to prove certain concentration theorems. In continuous time there are, for example, SDEs where exact solutions are known, resp. associated Fokker-Planck equations where the solution is known exactly. An example is the Ornstein-Uhlenbeck process. There it is possible to explicitly state the distribution of $x(t)$ given $x_{t=0} := x_0$.
As for discrete time, there are several theorems about the rate of convergence to the stationary distribution for certain Markov chains in Duflo's book.

- Thank you for the quick reply! I'll check the book you mentioned. – Federico Magallanez Dec 6 2010 at 20:16
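[Editorial addendum, not part of the thread: the concentration asked about in question 2 is easy to probe numerically. The sketch below uses renormalized Gaussian perturbations (the projected-Gaussian model the questioner mentions in the comments) as a crude stand-in for vMF noise, rather than a true vMF sampler.]

```python
import numpy as np

rng = np.random.default_rng(0)
d, rounds, noise = 50, 200, 0.05

# Random symmetric positive semi-definite matrix with a known top eigenvector.
M = rng.standard_normal((d, d))
A = M @ M.T
v1 = np.linalg.eigh(A)[1][:, -1]   # eigh returns eigenvalues in ascending order

q = rng.standard_normal(d)
q /= np.linalg.norm(q)
for _ in range(rounds):
    q = A @ q
    q /= np.linalg.norm(q)
    q += noise * rng.standard_normal(d)   # perturb the unit vector ...
    q /= np.linalg.norm(q)                # ... and project back onto the sphere

# Alignment with the dominant eigenvector stays near 1 when the spectral
# gap dominates the noise level.
print(abs(q @ v1))
```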
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123743772506714, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/26860-difference-equations-again-d.html
# Thread:

1. ## Difference Equations.. Again :D

Given that the following sequence, $Q_{n}$, satisfies a homogeneous second-order difference equation with constant coefficients, find $Q_{n}$:

$Q_{1} = 1, Q_{2} = 3, Q_{3} = 4, Q_{4} = 7, Q_{5} = 11, Q_{6} = 18, ...$

I reckon that it would have to start with $AQ_{n} + BQ_{n - 1} + CQ_{n - 2} = 0$. Do I then just put in the various numbers and solve them simultaneously?

2. Exactly, and after you get the constant coefficients you can solve the homogeneous equation to find $Q_n$.

3. Just a quick question: when solving them simultaneously, would I just have

18A + 11B + 7C = 0
4A + 3B + C = 0

4. You have to have 3 equations in order to find an unambiguous solution for the constants.

5. Ok, so I'll add 7A + 4B + 3C to that, but how do I rearrange for solving?

6. Originally Posted by brd_7: Ok, so I'll add 7A + 4B + 3C to that, but how do I rearrange for solving?

I'm sorry if I sound rude, but if you deal with a topic in "advanced" algebra such as difference equations you must at least know how to solve a system of linear equations. If you don't know or forgot, there are plenty of resources on the internet, such as: System of linear equations - Wikipedia, the free encyclopedia

7. Nah, you weren't rude by any means. It's just that I get given work that hasn't been explained properly; I then look at the notes and the examples are really poor, and so out of frustration I forget how to do the simplest of things. But through simultaneous equations I keep getting

$-5A - 5B = 0$
$-5A - 5B = 0$

which gets me nowhere.. or at least I think..

8. Start from

$AQ_{n} + BQ_{n - 1} + CQ_{n - 2} = 0.$

To get the first equation, substitute n=3:

$AQ_3 + BQ_2 + CQ_1 = 0 \implies 4A + 3B + C = 0.$

Now n=4:

$AQ_4 + BQ_3 + CQ_2 = 0 \implies 7A + 4B + 3C = 0.$

Now n=5:

$AQ_5 + BQ_4 + CQ_3 = 0 \implies 11A + 7B + 4C = 0.$

9. That gives the system

$\begin{cases} 4A + 3B + C = 0 \\ 7A + 4B + 3C = 0 \\ 11A + 7B + 4C = 0 \end{cases}$

As you can see, the third equation is the sum of the first two, so the equations are linearly dependent; that's why you had a problem. After solving the system we get that the general solution is A = -B = -C. Let's choose A = 1, B = C = -1; thus:

$Q_n - Q_{n - 1} - Q_{n - 2} = 0.$

You can notice that we've got the Fibonacci recurrence (with these starting values the sequence is actually the Lucas numbers 1, 3, 4, 7, 11, 18, ...).

10. Ok, I should be able to do the rest then. Thanks for the help
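[Editorial check, not part of the thread: the linear dependence and the resulting recurrence can be verified numerically, e.g. in Python.]

```python
import numpy as np
from scipy.linalg import null_space

Q = {1: 1, 2: 3, 3: 4, 4: 7, 5: 11, 6: 18}

# Rows encode A*Q_n + B*Q_{n-1} + C*Q_{n-2} = 0 for n = 3, 4, 5.
M = np.array([[Q[3], Q[2], Q[1]],
              [Q[4], Q[3], Q[2]],
              [Q[5], Q[4], Q[3]]], dtype=float)
print(null_space(M).ravel())   # one-dimensional, proportional to (1, -1, -1)

# Check the recurrence Q_n = Q_{n-1} + Q_{n-2} against all the data:
print(all(Q[n] == Q[n - 1] + Q[n - 2] for n in range(3, 7)))   # True
```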
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259943962097168, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/45095-extremes-f-x.html
# Thread:

1. ## Extremes of f"(x)

Assume the second derivative f"(x) > 0 of a convex function f(x) exists over an interval (a,b). Is it correct to say that the extremes (min, max, relative or absolute) of f"(x) always occur at the same ordinates where the extremes* of f(x) occur?

(*) Not necessarily of the same type, min or max, as in f"(x): it can be the opposite.

I cannot come up with an example that says "NO, it's not correct". I would appreciate either such an example or a few words that help me prove the above statement is correct.

2. Quote: Is it correct to say that the extremes (min, max, relative or absolute) of f"(x) always occur at the same ordinates where the extremes of f(x) occur?

The answer is no. As an example I think the following function works, $f(x)=x^8+1$.

3. ## Why?

Originally Posted by arbolis: The answer is no. As an example I think the following function works, $f(x)=x^8+1$.

Isaac, I might be missing something here, but I see your example as confirming that my statement is correct. Plot f(x) = x^8 + 1 and f"(x) = 56x^6. I see that in any interval (a,b) both f(x) and f"(x) reach a max or min at the same ordinate. Also, I tried defeating my statement with exponentials, logarithms, trigonometric functions, and basic polynomials. It always came out clean and valid. I'd really need to see a function that proves the contrary.

4. You're right. The more I think about it, the more I believe the statement is true. I'll try to prove it, but I'm not sure I'll succeed, so I hope someone else will do it. Anyway, it's an interesting problem. Any news I get, I'll post.

5. ## Theorem

I started from this theorem: Assume f'(x) exists over (a,b). Then, if f' is monotonic over (a,b), f is certainly convex over (a,b). Special case: f is certainly convex over (a,b) if f" exists and is non-negative over (a,b).

6. What about $f(x)=x^2$? $f''(x)=2>0$, so f is convex, as it must be. Now looking at the claim that the extremes of f"(x) always occur at the same ordinates where the extremes of f(x) occur, I'd say "no", because any given point is an extremum of $f''$, but not of $f$. I mean that, for example, if $(a,b)=(-1,1)$, any given point in $(-1,1)$ is a maximum and a minimum of $f''$, according to the definition of an extremum of a function, while only $x=0$ is a minimum of $f$ in $(-1,1)$.
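[Editorial footnote, not in the thread: both examples discussed above can be checked symbolically.]

```python
import sympy as sp

x = sp.symbols('x', real=True)

# arbolis's example: the critical points of f and of f'' coincide at x = 0.
f = x**8 + 1
print(sp.solve(sp.diff(f, x), x))      # -> [0], stationary point of f
print(sp.solve(sp.diff(f, x, 3), x))   # -> [0], stationary point of f''

# The x**2 example from the last post: f'' = 2 is constant, so every point
# of (a, b) is degenerately both a max and a min of f'', while f itself has
# its only minimum at x = 0.
g = x**2
print(sp.diff(g, x, 2))                # -> 2
```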
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9036678671836853, "perplexity_flag": "middle"}
http://theoreticalatlas.wordpress.com/2007/09/25/coming-attractions/
# Theoretical Atlas

He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand.

September 25, 2007

## Coming Attractions

Posted by Jeffrey Morton under category theory, groupoids, higher dimensional algebra, physics, tqft
[5] Comments

Due to the rapid-fire nature of the blogosphere (or, in deference to John Armstrong, the "Blathysphere", or maybe "blathyscape"), my blog ("blath") has been discovered before I expected, and in particular before I've had the chance to put anything very interesting in it. So here I'll just say something about "coming attractions" – a sort of mid-level executive summary of the next batch of things I expect to be working on and commenting on. Also, possibly later on I should have a math post or two about some talks I saw recently.

Since I graduated at UCR in June, I haven't had much chance to do any actual work – partly because I broke my wrist in a bike accident, and lost the use of my writing hand for six weeks. Between that and the hassle of moving, I wasn't able to do much but some reading. Now that the cast is off, I've been getting back to work. The first "real" research-related post I expect to make will be an announcement that a (slightly) polished version of my dissertation, "Extended TQFT's and Quantum Gravity", has been released on the preprint archive – hopefully this week. That in turn should kick off some descriptions of what's inside as I get more into the process of turning it into some smaller, more digestible papers. These will fall, at first, into three parts:

1) A paper which has already been posted as math.CT/0611930, describing how to get a "double bicategory" of cobordisms with corners, and from that, a bicategory. Here I explain how cobordisms are cospans of manifolds with boundary, so the new structures are double cospans of manifolds with corners, and how that works. This may end up being two parts. One is a description of Dominic Verity's notion of a "double bicategory", an aside on how to interpret it as a special case of bicategories internal to $\mathbf{Bicat}$, and how to get one from double spans (functors $DS:\Lambda^2 \rightarrow C$). Marco Grandis has a pretty thorough description of these in this paper and its sequels, although our approaches are slightly different. The second part has to do with how to apply this to cobordisms with corners (cobordisms between cobordisms) – also something Grandis discusses in the second paper of that series. I also need to show how to collapse the more complicated structure to a mere bicategory, in order to do what I will want to do in part (3) below.

There's an issue here I'll want to think about at some point, related to a question Aaron Lauda raised. The question was this. The category whose objects are 1-D manifolds and whose morphisms are 2-D cobordisms between them has a nice abstract description: it is the free symmetric monoidal category with a Frobenius object. In Aaron's work with Hendryk Pfeiffer, they likewise described a category of "open closed strings", which can have either 1-D manifolds or 1-D manifolds with boundary (collections of circles and line segments, basically) as objects, and cobordisms between them as morphisms. They showed this has a similar characterization, but with "Knowledgeable Frobenius" replacing "Frobenius" in the above.
These have a nice description in terms of adjunctions, so Aaron was asking me if the same could be done for the double bicategory I talk about. That would need a concept of adjunction in double categories (or cubical n-categories, more generally). I don't know what the state of understanding is on this. More generally, it's strange that "cobordisms of cobordisms" really wants to be a cubical 2-category in some sense, whereas, to do what I want to do with them (see below), I have to convert them into a globular one, to take functors into $\mathbf{2Vect}$. I don't know the best way to deal with this: is there a cubical version of $\mathbf{2Vect}$, for example?

2) One part will deal with building 2-vector spaces from groupoids using functors into the category $\mathbf{Vect}$, and 2-linear maps from spans of groupoids, using the pullback (composition) along an inclusion, and its (two-sided) adjoint. Along the way, it includes proofs of some well-known folklore theorems about 2-vector spaces which are hard to find anywhere. I plan to give a talk based on this at Groupoidfest '07 in Iowa City in November. Soon enough – certainly before the Groupoidfest – I'll have a bigger post about this stuff (and most likely post slides). The basic idea is that the category of functors from an essentially finite groupoid $X$ into $\mathbf{Vect}$ is a Kapranov-Voevodsky 2-vector space – that is, a $\mathbb{C}$-linear additive category which is generated by a finite number of simple objects. (The fact that this definition is equivalent to the one given by Kapranov and Voevodsky is one of those theorems which seem to be well known, but hard to track down.) The simple objects correspond to the equivalence classes of $X$. From a span of groupoids, it is possible to build a linear map between the corresponding 2-vector spaces.

The motivation for building 2-vector spaces on groupoids in the new work is to categorify the quantization of a classical system, but the two ways I've looked at are a bit different in how they accomplish it. Ignoring complications like symplectic geometry for the moment, the configuration space of a classical system is described as a set $X$. Each element of the set is one possible state of the system. The corresponding quantum system will have states which live in $L^2(X)$ – in particular, they are complex-valued functions on the set $X$. And instead of being able to read off values like position, momentum, energy, and other features of the system by looking at the value these have at a single point, you need some algebra of operators on $L^2(X)$, whose eigenvalues are the values you can observe for the observable that corresponds to a given operator.

In categorifying this, $X$ becomes a groupoid, in which the elements of the set can be related to each other – by "symmetries". Instead of functions into the complex numbers, we take functors into $\mathbf{Vect}$, and obtain a 2-vector space of what I suppose should be called "2-states". Given spans of groupoids, it becomes possible to get linear maps from one 2-vector space to another, using "pullback" and "pushforward" of these functors into $\mathbf{Vect}$ (see the toy calculation below). I'll say more about this later on, but one thing that I find perplexing about this is how (if at all) it relates to some earlier work I did in this paper on the categorified harmonic oscillator, which is heavily based on this paper by John Baez and Jim Dolan, which introduces "stuff types".
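[Editorial aside, not from the original post: the span-to-linear-map idea has a completely decategorified toy version at the level of finite sets, where a span $X \leftarrow S \rightarrow Y$ just gives a matrix of counting numbers. The groupoid construction sketched above refines these counts by symmetries; the sketch below is only the set-level shadow.]

```python
import numpy as np

def span_matrix(S, f, g, nX, nY):
    """Matrix of the span X <-f- S -g-> Y: entry (y, x) counts s in S over (x, y)."""
    M = np.zeros((nY, nX))
    for s in S:
        M[g[s], f[s]] += 1
    return M

# Example: S = {0, 1, 2} over X = {0, 1} and Y = {0, 1}.
f = {0: 0, 1: 0, 2: 1}   # leg S -> X
g = {0: 0, 1: 1, 2: 1}   # leg S -> Y
print(span_matrix(range(3), f, g, 2, 2))
# [[1. 0.]
#  [1. 1.]]
```

[Composing spans by fiber product corresponds to multiplying these matrices, which is one way to see why spans are a reasonable stand-in for linear maps.]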
Both projects involve groupoids, and spans of groupoids giving rise to linear operators, as part of a categorification of some elementary quantum theory, but there are significant differences. At some point, I'd like to return to the question of whether they're related, and if so, how.

3) One part uses the above to build an "extended TQFT". A TQFT, or topological quantum field theory, is a quantum field theory in that it gives a Hilbert space of states for some field on a specified "space" (i.e. manifold), and linear maps associated to "spacetimes" (cobordisms) joining them. It is topological in that its states are topologically invariant – that is, they have no local degrees of freedom, only global ones. These started life in physics, but have fallen by the wayside there, and now mostly find life in the subject of quantum topology, where they give manifold invariants. A TQFT can be described as a functor from a category of manifolds and cobordisms (see (1)) into $\mathbf{Vect}$. This way of putting it makes it relatively easy to see what to do if one wants to categorify – which we do, in order to get higher codimension (more on this later, I'm sure). The idea is to build a 2-functor from the bicategory of cobordisms with corners (see (1)) into $\mathbf{2Vect}$. This can be done using gauge theory. The main idea is to turn a cobordism, seen as a cospan of manifolds (with corners), into a span of groupoids – namely, the groupoids of flat connections on these spaces, with gauge transformations as morphisms – and then build 2-vector spaces and 2-linear maps, etc., as laid out in the program of (2) above. The main theorem proving that such a 2-functor exists and is given by this construction was the organizing theme of my dissertation defense talk. This part is the mathematical core of what I've been working on.

4) Finally, this is supposed to be related to quantum gravity somehow. I'll put off talking about this until I actually put the thesis on the archive.

Until then, I may decide to post a little about some talks I've been to recently. UWO has a great department with lots of interesting talks. I recently attended a couple of these by graduate students. One was by Arash Pourkia, about Braided Categories and Hopf Algebras. The second was by Michael Misamore, on Galois Theory – from the point of view of Grothendieck – and could equally well be called "Covering Spaces"… from the point of view of Grothendieck.

### 5 Responses to "Coming Attractions"

1. nbornak Says: September 25, 2007 at 11:05 pm
It's generally the nature of the internet that things get discovered before their authors or creators want them to be. Welcome! I wish I could see you talk at Groupoidfest, but the Math GREs are scheduled right in the middle of it (and several other conferences)! What terrible timing.

2. John Armstrong Says: September 26, 2007 at 12:44 am
"Blathyscape". I like it. It's deeply satisfying. Anyhow, I'm glad someone else is talking about viewing cobordisms as cospans. It's one of the biggest tools in my own arsenal, at least in the case of 3-manifolds with embedded 1-manifolds. Of course, you beat me to the publication punch, but if there's only the two of us talking there's plenty of elbow room.

3. Jeffrey Morton Says: September 26, 2007 at 1:41 am
John: Well, it's not just the two of us – check out those papers by Marco Grandis, where he talks about cobordisms as examples of "collared cospans", so that the two maps are inclusions which factor through the inclusion of collars.
It’s a nice way to handle the problem of needing these collars for smooth structure. As for precedence – no idea. I remember spontaneously thinking of cobordisms that way myself, but that doesn’t mean much. His first paper has been published, but cites mine, which hasn’t. Modern research seems like a whole crowd of giants struggling to stand on each others’ shoulders – or anyway, they might be giants. Possibly windmills. 4. John Armstrong Says: September 26, 2007 at 2:44 am Actually, that inclusion is more like the one I use. I try to work in the smooth category of tangles. It makes life so much easier than in the PL category, though I had to reprove Reidemeister in this context since I couldn’t find anyone who’d written the darn thing down! 5. October 6, 2007 at 3:32 pm [...] I’m the first (and so far only) to use them in knot theory like I do. However, Jeff Morton has noted that cobordisms are a sort of cospan, and my use of tangles is analogous to [...] %d bloggers like this:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523364305496216, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/tagged/copula
# Tagged Questions

The copula tag has no wiki summary.

0answers 61 views

### Estimation of ranks of log-returns via copula

I have successfully chosen and estimated a copula for the ranks of the log-returns of my stocks. My question is: since I have worked with the ranks instead of directly with the log-returns (in order to be ...

2answers 416 views

### Copula models and the distribution of the sum of random variables without Monte Carlo

There is a vast literature on copula modelling. Using copulas I can describe the joint law of two (and more) random variables $X$ and $Y$, i.e. $F_{X,Y}(x,y)$. Very often in risk management (credit ...

1answer 77 views

### Is there a copula that can estimate negative tail dependence?

I have encountered numerous copula estimators that can estimate time-invariant and time-varying linear and non-linear correlations on the interval $[-1,1]$, and these estimators are fully consistent ...

2answers 199 views

### Generate correlated random variables from Normal and Gamma distributions

I want to generate a random vector $z$ of dimension $k+m$ with some given correlation matrix $\Sigma$, such that the first $k$ elements of the vector are distributed normally and the last $m$ elements ...

3answers 383 views

### Do I need a copula to accurately estimate the VaR of a portfolio of risky assets?

I need to estimate the daily VaR of a portfolio of various exposures in $n$ risky assets (say equity futures). The simplest approach, I think, would be to just estimate VaR from a multivariate normal ...

1answer 305 views

### copula-marginal algorithm

Has there been any interesting work or advances on the copula-marginal algorithm (CMA) as proposed by Attilio Meucci? I am unable to find anything on the web other than the original article; here is ...
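[Editorial note, not part of the tag listing: several of these questions, e.g. the Normal/Gamma one, reduce to the standard Gaussian-copula recipe. A minimal sketch, with the correlation and the Gamma shape parameter chosen purely for illustration:]

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(1)
rho = 0.7
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

z = L @ rng.standard_normal((2, 100_000))  # correlated standard normals
u = norm.cdf(z)                            # Gaussian copula: uniform margins

x = norm.ppf(u[0])                         # keep a normal margin (= z[0])
y = gamma.ppf(u[1], a=2.0)                 # transform to a Gamma(2) margin

print(np.corrcoef(x, y)[0, 1])             # dependence survives the transform
```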
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8804558515548706, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Abundant_number
# Abundant number

In number theory, an abundant number or excessive number is a number for which the sum of its proper divisors is greater than the number itself. 12 is the first abundant number: its proper divisors are 1, 2, 3, 4 and 6, for a total of 16. The amount by which the sum exceeds the number is the abundance. 12 has an abundance of 4, for example.

## Definition

An abundant number is a number n for which the sum of divisors satisfies σ(n) > 2n or, equivalently, the sum of proper divisors (or aliquot sum) satisfies s(n) > n. The abundance is the value σ(n) − 2n (or s(n) − n).

## Examples

The first few abundant numbers are:

12, 18, 20, 24, 30, 36, 40, 42, 48, 54, 56, 60, 66, 70, 72, 78, 80, 84, 88, 90, 96, 100, 102, … (sequence in OEIS).

For example, the proper divisors of 24 are 1, 2, 3, 4, 6, 8, and 12, whose sum is 36. Because 36 is more than 24, the number 24 is abundant. Its abundance is 36 − 24 = 12.

## Properties

• The smallest odd abundant number is 945.
• The smallest abundant number not divisible by 2 or by 3 is 5391411025, whose prime factors are 5, 7, 11, 13, 17, 19, 23, and 29 (sequence in OEIS). An algorithm given by Iannucci in 2005 shows how to find the smallest abundant number not divisible by the first k primes.[1] If $A(k)$ denotes the smallest abundant number not divisible by the first k primes, then for all $\epsilon>0$ we have
$(1-\epsilon)(k\ln k)^{2-\epsilon}<\ln A(k)<(1+\epsilon)(k\ln k)^{2+\epsilon}$
for k sufficiently large.
• Infinitely many even and odd abundant numbers exist. Marc Deléglise showed in 1998 that the natural density of the set of abundant and perfect numbers is between 0.2474 and 0.2480.[2]
• Every proper multiple of a perfect number, and every multiple of an abundant number, is abundant.
• Every integer greater than 20161 can be written as the sum of two abundant numbers.[3]
• An abundant number which is not a semiperfect number is called a weird number; an abundant number with abundance 1 would be called a quasiperfect number, but none have yet been found.

## Related concepts

Closely related to abundant numbers are perfect numbers, that is, numbers the sum of whose proper factors equals the number itself (such as 6 and 28; more formally, σ(n) = 2n), and deficient numbers, numbers the sum of whose proper factors is less than the number itself (σ(n) < 2n). The natural numbers were first classified as deficient, perfect or abundant by Nicomachus in his Introductio Arithmetica (circa 100), who described abundant numbers as being like deformed animals with too many limbs.

The abundancy index of n is the ratio σ(n)/n.[4] The sequence (ak) of least numbers n such that σ(n) > kn, in which a2 = 12 corresponds to the first abundant number, grows extremely quickly (sequence in OEIS).

If p = (p1,...,pn) is a list of primes, then p is termed abundant if some integer composed only of primes in p is abundant. A necessary and sufficient condition for this is that the product of pi/(pi−1) be at least 2.[5]

## References

1. D. Iannucci (2005), "On the smallest abundant number not divisible by the first k primes", Bulletin of the Belgian Mathematical Society 12 (1): 39–44.
2. M. Deléglise (1998), "Bounds for the density of abundant integers", Experimental Mathematics 7 (2): 137–143. MR 1677091.
3. "Sloane's A048242: Numbers that are not the sum of two abundant numbers", The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
4. Laatsch, Richard (1986), "Measuring the abundancy of integers", Mathematics Magazine 59 (2): 84–92. ISSN 0025-570X. JSTOR 2690424. MR 0835144. Zbl 0601.10003.
5. Friedman, Charles N. (1993), "Sums of divisors and Egyptian fractions", Journal of Number Theory 44 (3): 328–339. doi:10.1006/jnth.1993.1057. MR 1233293. Zbl 0781.11015.
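[Editorial sketch, not part of the article: the definition is easy to make computational.]

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

def abundance(n):
    return aliquot_sum(n) - n

# Reproduces the list in the Examples section:
print([n for n in range(1, 103) if abundance(n) > 0])
# [12, 18, 20, 24, 30, 36, 40, 42, 48, 54, 56, 60, 66, 70, 72, 78, 80, 84,
#  88, 90, 96, 100, 102]
print(abundance(12))  # 4, as stated in the introduction
```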
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8504882454872131, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/86877-show-qualities-related-law.html
# Thread:

1. ## Show that the quantities are related by the law?

I am studying analytical methods as part of my course and I have just enjoyed Pascal's triangle and got my head round the binomial theorem :-) Can anyone help me with the question below? A bit lost!!

Atmospheric pressure $p$ is measured at varying altitudes $h$ and the results are shown below:

| h (meters) | p (cm) |
|------------|--------|
| 500 | 73.39 |
| 1500 | 68.42 |
| 3000 | 61.60 |
| 5000 | 53.56 |
| 8000 | 43.41 |

Show that the quantities are related by the law $p = ae^{kh}$, where $a$ and $k$ are constants. Determine the values of $a$ and $k$ and state the law. Find the atmospheric pressure at 10,000 m.

Many, many thanks folks!! ;-)

2. Originally Posted by adonis1234: [...]

This doesn't have anything to do with Pascal's triangle or the binomial theorem. Perhaps that is what was throwing you off! You want to show that the data fit

$P = ae^{kh}.$

Okay, if that is true then for the first data point, P = 73.39 and h = 500, so $73.39 = ae^{500k}$, and for the last, P = 43.41 and h = 8000, so $43.41 = ae^{8000k}$. That gives two equations for a and k. We can eliminate a by dividing one equation by the other:

$\frac{73.39}{43.41} = \frac{e^{500k}}{e^{8000k}} = e^{(500-8000)k} = e^{-7500k},$

so $e^{-7500k} = 1.6906$. Solve that by taking the natural logarithm of both sides. After you have found k, put it back into either of the original equations to solve for a. Then check the other values on your list to see if they satisfy that equation, at least approximately.

I chose the two ends of the list for P and h to find a and k because they are the extremes and tend to give a better approximation to the whole list than numbers close together.
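[Editorial check, not part of the thread: the same fit can be done with all five data points by a least-squares line through $\ln p$ versus $h$.]

```python
import numpy as np

h = np.array([500.0, 1500.0, 3000.0, 5000.0, 8000.0])
p = np.array([73.39, 68.42, 61.60, 53.56, 43.41])

# If p = a*exp(k*h), then ln p = ln a + k*h is linear in h.
k, ln_a = np.polyfit(h, np.log(p), 1)
a = np.exp(ln_a)
print(a, k)                    # roughly a ≈ 76 and k ≈ -7.0e-5
print(a * np.exp(k * 10000))   # pressure at 10,000 m: roughly 38 cm
```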
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415191411972046, "perplexity_flag": "middle"}
http://alanrendall.wordpress.com/2009/07/24/metabolic-networks/
# Hydrobates

A mathematician thinks aloud

## Metabolic networks

The processes of life involve systems of coupled chemical reactions. Often these are assumed to be homogeneous in space, so that the concentrations of the substances involved are functions of time alone. The dynamics of these quantities are described by a system of ODEs satisfied by the concentrations. I will call a system of this kind a metabolic system. What I want to do here is to say what is special about metabolic systems compared to general ODE systems. I also want to say something about features of these systems which are typically studied in theoretical work. A feature of metabolic systems which should be mentioned immediately is that they are often very large and moreover depend on a large number of parameters. This makes rigorous analysis difficult and can lead to a strong temptation to move to numerical approaches. On the basis of my personal mathematical preferences I would like to see analytical work taken as far as possible. My main source of information on this topic is the book 'Systems Biology in Practice' by E. Klipp et al.

Consider a system of $n$ substances taking part in $r$ reactions. The system is taken to be of the form $\frac{dS}{dt}=N\nu$ where $S$ is the vector of concentrations taking values in ${\bf R}^n$, $\nu$ is a vector of reaction rates taking values in ${\bf R}^r$, and $N$ is a constant matrix, the stoichiometric matrix. It is $n$ by $r$. It contains the information about how many molecules of each type are consumed or produced in each reaction. The whole system depends on $m$ parameters which form a vector in ${\bf R}^m$. Notice that any system of ODEs can be put into this form – simply take $n=r$ and $N$ equal to the identity.

One obviously interesting question about the system is how many stationary solutions it admits. Actually what is of interest is steady-state solutions where $\nu$ is not identically zero. (Call these non-trivial.) The concentrations of all chemicals should be independent of time, but there should be non-zero reaction rates, so that individual reactions are converting certain chemicals into others. Non-trivial steady states are only possible if the rank of $N$ is less than $r$. In general the possible steady-state reaction rates form a vector space of dimension $r$ minus the rank of $N$. Notice that the definition of 'non-trivial' here does not only use information about the ODE system – it also uses information about a particular splitting of the right-hand side into two factors.

Metabolic control analysis is an attempt to understand which changes in a metabolic system result in which changes in particular features of the solutions. The quantity $\nu$ depends on the variables $S$ and $p$. For certain purposes it may be helpful to consider the derivatives of $\nu$ with respect to these variables. In terms of components this means considering the partial derivatives $\frac{\partial\nu_i}{\partial S_j}$ or $\frac{\partial\nu_i}{\partial p_j}$. In fact it is common to consider normalized quantities such as $\frac{S_j}{\nu_i}\frac{\partial\nu_i}{\partial S_j}$, which is possible as long as the denominators do not vanish. The resulting quantities are called elasticities ($\epsilon$-elasticity for $S$ and $\pi$-elasticity for $p$). These relative quantities seem to require giving up the picture of certain quantities as vectors. Maybe they should be thought of as bunches of scalars or as points of a manifold admitting certain preferred transformations.
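[Editorial illustration, not from the post or its sources: a toy linear chain with inflow and outflow, written as $dS/dt = N\nu$ with mass-action rates. The steady-state flux space is the null space of $N$, as described above; all rate constants are made up for the example.]

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import null_space

# Toy network: inflow -> A -> B -> outflow, with mass-action rates.
k0, k1, k2 = 1.0, 2.0, 0.5
N = np.array([[1, -1, 0],    # dA/dt
              [0, 1, -1]])   # dB/dt

def nu(S):
    A, B = S
    return np.array([k0, k1 * A, k2 * B])

sol = solve_ivp(lambda t, S: N @ nu(S), (0, 50), [0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])        # steady state: A* = k0/k1 = 0.5, B* = k0/k2 = 2.0

# Steady-state fluxes lie in the null space of N: here all three rates equal.
print(null_space(N).ravel())
```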
A different type of coefficient is known as a control coefficient. These are defined in terms of steady-state solutions of the system. Steady states can be changed by changing parameters in the system. It seems to me that the definition of the control coefficients requires being able to define the rate of change of some aspect of the steady-state solution (e.g. one of the concentrations) with respect to another. This appears to include the implicit requirement that the steady states are locally isolated for given values of the parameters. This means that both quantities of interest are functions of a common quantity (the parameter being varied), so that we have a chance to define the derivative of one with respect to the other.

There are certain types of nonlinearities which are typically used in modelling metabolic processes. The simplest is the mass action form. Here, if $p$ molecules of a species with concentration $S_1$ and $q$ molecules of a species with concentration $S_2$ are the inputs for a certain reaction, then the reaction rate is taken to be proportional to $S_1^pS_2^q$. This has a simple intuitive interpretation in terms of the probability that the necessary molecules meet. The other common type of nonlinearity results from the mass action form by a Michaelis-Menten reduction, a procedure described in a previous post. This leads to a non-polynomial nonlinearity but has the important advantage of reducing the number of equations in the system.

### 3 Responses to "Metabolic networks"

1. Jonathan Vos Post Says: July 25, 2009 at 5:53 am | Reply

The Evolution of Controllability in Enzyme System Dynamics
http://www.interjournal.org/manuscript_abstract.php?1152005238

Abstract: A building block of all living organisms' metabolism is the "enzyme chain." A chemical "substrate" diffuses into the (open) system. A first enzyme transforms it into a first intermediate metabolite. A second enzyme transforms the first intermediate into a second intermediate metabolite. Eventually, an Nth intermediate, the "product", diffuses out of the open system. What we most often see in nature is that the behavior of the first enzyme is regulated by a feedback loop sensitive to the concentration of product. This is accomplished by the first enzyme in the chain being "allosteric", with one active site for binding with the substrate, and a second active site for binding with the product. Normally, as the concentration of product increases, the catalytic efficiency of the first enzyme is decreased (inhibited). To anthropomorphize, when the enzyme chain is making too much product for the organism's good, the first enzyme in the chain is told: "whoa, slow down there." Such feedback can lead to oscillation, or, as this author first pointed out, "nonperiodic oscillation" (for which, at the time, the term "chaos" had not yet been introduced). But why that single feedback loop, known as "endproduct inhibition" [Umbarger, 1956], and not other possible control systems? What exactly is evolution doing, in adapting systems to do complex things with control of flux (flux meaning the mass of chemicals flowing through the open system in unit time)? This publication emphasizes the results of Kacser and the results of Savageau, in the context of this author's theory.
Other publications by this author [Post, 9 refs] explain the context and literature on the dynamic behavior of enzyme system kinetics in living metabolisms; the use of interactive computer simulations to analyze such behavior; the emergent behaviors “at the edge of chaos”; the mathematical solution in the neighborhood of steady state of previously unsolved systems of nonlinear Michaelis-Menten equations [Michaelis-Menten, 1913]; and a deep reason for those solutions in terms of Krohn-Rhodes Decomposition of the Semigroup of Differential Operators of the systems of nonlinear Michaelis-Menten equations.

Living organisms are not test tubes in which chemical reactions have reached equilibrium. They are made of cells, each cell of which is an “open system” in which energy, entropy, and certain molecules can pass through cell membranes. Due to conservation of mass, the rate of stuff going in (averaged over time) equals the rate of stuff going out. That rate is called “flux.” If what comes into the open system varies as a function of time, what is inside the system varies as a function of time, and what leaves the system varies as a function of time. Post’s related publications provide a general solution to the relationship between the input function of time and the output function of time, in the neighborhood of steady state. But the behavior of the open system, in its complexity, can also be analyzed in terms of mathematical Control Theory. This leads immediately to questions of “Control of Flux.”

2. Hopf bifurcations and Lyapunov numbers « Hydrobates Says: January 10, 2010 at 1:15 pm

[...] There are very many applications where the Hopf bifurcation plays a role. A first example is the Brusselator mentioned above. This is a schematic two-dimensional model for a chemical reactor. When I hear the name I get a mental picture of Brussels sprouts. This is of course nonsense. The name comes from the fact that the model was developed in Brussels and is a simplification of a three-dimensional model called the Oregonator which was developed in Oregon. The latter name was influenced by the fact that it is a kind of oscillator. The Oregonator is nothing other than the Field-Noyes model discussed in a recent post. As mentioned there the Field-Noyes model also exhibits Hopf bifurcations. Hopf bifurcations occur in the FitzHugh-Nagumo and Hodgkin-Huxley systems. Thus they are potentially relevant for electrical signalling by neurons. They may also come up in another kind of biological signalling, namely that by calcium. For an extensive review of this subject I refer to a paper of Martin Falcke (Adv. Phys. 53, 255). In section 5 of that paper the author discusses experimental evidence indicating that certain calcium oscillations cannot be modelled using Hopf bifurcations and that it might be better to use other types of bifurcation. On the other hand he suggests that the evidence for this is not conclusive. Oscillations in glycolysis are modelled by the Higgins-Selkov oscillator, a two-dimensional system bearing a superficial resemblance to the Brusselator. The unknowns are the concentrations of ADP and the enzyme phosphofructokinase. This simple system describing a part of glycolysis exhibits a Hopf bifurcation. More information on this and related systems can be found in the book of Klipp et al. on systems biology quoted in a previous post. [...]

3. Chemical reaction network theory « Hydrobates Says: October 6, 2010 at 8:33 am

[...]
often include complicated networks of chemical reactions. I have made some comments on this in a previous post. From a mathematical point of view this leads to large systems of ordinary differential equations [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 35, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437437057495117, "perplexity_flag": "head"}
http://mathoverflow.net/questions/110810?sort=votes
## Dual (/reduction?) graph of a curve

This might be a bit of a broad question, or maybe even several questions. Recently I have learned about the connection between algebraic geometry and graph theory, via the dual graph of a curve. I have also seen people call it the reduction graph. I find it really fascinating that on both sides there is a notion of divisors, and for example the statement of Riemann-Roch is identical (up to notation). Further, both sides seem to have genera, Picard groups and other notions that I only knew of in algebraic geometry before. I even saw a kind of Riemann-Hurwitz formula for graphs. In short, I would like to know if there exists an article or book that introduces this connection for someone who knows both fields at graduate level, but close to nothing about the connection between them. Specifically I would like to have answers to these questions (an answer or a reference is perfect):

• What is the general/natural setting of this connection? I.e., what are necessary and/or sufficient conditions on the curve to have a "nice connection"? (Where "nice connection" is purposely vague.)

Assuming that our curve satisfies the above conditions:

• What invariants carry over? I have read that the genus of the curve is also the genus of the dual graph. How about the Picard group and/or other notions that we have on both sides of the connection?

• Is this connection functorial for a suitable category of suitable curves?

I realize that these questions are not very specific (of which the fact that this question does not contain any LaTeX markup might be a witness), but I hope that someone can give me pointers to introductory material.

- 2 Reduction graphs arise naturally in the reduction theory of a curve. More precisely, let $X$ be a curve over a number field $K$. (It's more natural to consider a local field $K$.) Let $O_K$ be the ring of integers of $K$. Then, if $\mathcal{X}$ is a model of $X$ over $O_K$, you can consider the "reduction" of $\mathcal{X}$ at a finite place $v$ of $O_K$, i.e., the geometric fibre of $\mathcal X\to$ Spec $O_K$ over $v$. The "reduction" behaviour of your curve $X$ at the place $v$ can be tautologically read off its "reduction graph". See Chapter 10.1.4 of Liu's book for more details. – Ariyan Javanpeykar Oct 27 at 17:31

@Ariyan Thanks for the reference. I understood the definition, but I wondered if there were any 'natural' conditions that one usually puts on the curve $X/K$, or the model $\mathcal{X}/O_{K}$. Finiteness conditions (which ones?) or properness (probably not for the model), regularity, etc... But I will first take a look at [§10, Liu]. – Johan Commelin Oct 28 at 21:27

1 @Johan. I posted a long comment as an answer below. The curves we consider are in general smooth projective and geometrically connected over a number field (or local field) $K$. The theory becomes nice if the genus of $X$ is positive. There are different "natural" models that one can consider for $X$ over $O_K$. All of these give different reduction graphs. – Ariyan Javanpeykar Oct 29 at 8:11

## 1 Answer

I'm not sure if this should be posted as an answer, but it became too long to post as a comment.
Let $X$ be a smooth projective geometrically connected curve of positive genus over a number field $K$. (The condition on the genus will be used below.) Let me explain some elements of the theory of models for $X$ over $O_K$.

Firstly, there exists a model of $X$ over $O_K$. In fact, there is a closed immersion $X\to\mathbf{P}^n_{K}$. The Zariski closure of $X$ in $\mathbf{P}^n_{O_K}$ via $X\to \mathbf{P}^n_K \subset \mathbf{P}^n_{O_K}$ gives a model for $X$ over $\mathcal{O}_K$. It is a projective model. It is irreducible and reduced as a scheme. Normalizing this scheme gives a normal (projective) model. Now, you can use "resolution of singularities", e.g., Lipman's theorem, to obtain a regular model for $X$ over $O_K$. Then, subsequently contracting all the $-1$-curves (also called exceptional curves) on this regular model, you will obtain a regular projective model $\mathcal{X}_{min}$ for $X$ over $O_K$ which is "minimal". We call $\mathcal{X}_{min}$ the minimal regular model of $X$ over $O_K$. It is this minimality condition which is quite natural (for curves of positive genus).

Before continuing, let me give some references for the above paragraph. You can find them all in Liu's book. For basic facts on the fibres of a model for $X$ over $O_K$ see Chapter 8.3.1. Desingularization of a normal model is explained in Chapter 8.3.4. For example, a precise statement of Lipman's theorem can be found in Theorem 8.3.44. Exceptional divisors on a model are defined in Definition 9.3.1. There are two notions of minimality (Definition 9.3.12) which are shown to coincide in Corollary 9.3.24 (when the generic fibre has positive genus). Finally, the existence of the minimal regular model is obtained in Theorem 9.3.21.

To summarize, the minimal regular model exists and is unique. Thus, it is "natural" to look at the dual reduction graphs of this model.

The (geometric) fibres of $\mathcal{X}_{min}$ are, in general, very complicated. Of course, by Proposition 8.3.11, almost all of them are smooth. But the geometric fibre over a "bad" place will be a singular curve with "complicated" singularities. This brings us to semi-stable reduction. In fact, if your reduction isn't a smooth curve, you could hope for the next best thing: semi-stability. A curve over an algebraically closed field is semi-stable if it is connected, reduced and has only ordinary double singularities. The model $\mathcal{X}_{min}$ doesn't have semi-stable geometric fibres in general, but a deep theorem of Grothendieck and Mumford states that there exists a finite field extension $L/K$ such that the minimal regular model of $X_L$ over $O_L$ is semi-stable, i.e., its geometric fibres are semi-stable. Considering the reduction graph of this model is also very natural; see Definition 10.3.17 in Liu's book. (Caution: just because the geometric fibres have "easy" singularities doesn't mean the configuration of their irreducible components is easy. In fact, determining the configuration is a difficult problem in arithmetic geometry. For example, the semi-stable reduction of the modular curve $X_0(n)$ (for all $n$) has only been achieved recently by Jared Weinstein; see http://arxiv.org/abs/1010.4241 .)

There is also the notion of stability. This is stronger than semi-stability. The stable reduction of a curve is also "natural" to consider.

I'll finish with a quick note on elliptic curves. Let $E/K$ be an elliptic curve.
Then you can consider the minimal regular model and, for some suitable $L/K$, the semi-stable reduction of $E_L$ over $O_L$. It is also natural to ask whether $E$ has a model over $O_K$ which extends the group structure and the smoothness property of $E/K$. Such a model doesn't exist in general if we demand properness. If we drop the properness condition, then such a model exists. (The finiteness conditions being as usual: the model is of finite type and separated over the base scheme Spec $O_K$.) You can then ask for a model which extends this group structure in the "best possible" way. This brings us to Néron models. Such a model for $E$ over $O_K$ always exists. In fact, you can show that the smooth locus $\mathcal{E}_{min}^{sm}$ of the minimal regular model $\mathcal E_{min}$ of $E$ over $O_K$ is the Néron model of $E$ over $O_K$; see Theorem 10.2 for a discussion of this beautiful theory.

- @Ariyan Thank you very much for this 'comment'! I already knew about the Néron model for elliptic curves, but you really helped me with the minimal regular model. I hope to get enough feeling for these configurations to be able to understand why, for example, the genus of the reduction graph is the same as the genus of the curve. – Johan Commelin Oct 29 at 11:56
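On the last point in this comment, the statement usually invoked (paraphrased here from memory, so check Liu's Chapter 10.3 for the precise hypotheses) relates the arithmetic genus of a connected semi-stable fibre $X_k$, with irreducible components $C_i$ and dual graph $G$, to the first Betti number of $G$:

$$p_a(X_k) \;=\; \sum_i g(\widetilde{C_i}) \;+\; b_1(G), \qquad b_1(G) = \#E(G) - \#V(G) + 1,$$

where the $\widetilde{C_i}$ are the normalizations of the components and $b_1(G)$ is the cycle rank of the (connected) graph $G$. Since the arithmetic genus is constant in a flat proper family with geometrically connected fibres, $p_a(X_k)$ equals the genus of the generic fibre; in particular, when all components are rational, the genus of the curve is exactly the genus of the graph.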
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415788650512695, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Sample_size_determination
# Sample size determination

Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is determined based on the expense of data collection and the need to have sufficient statistical power. In complicated studies there may be several different sample sizes involved: for example, in a survey involving stratified sampling there would be different sample sizes for each stratum. In a census, data are collected on the entire population, hence the sample size is equal to the population size. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.

Sample sizes may be chosen in several different ways:

• expedience - for example, include those items readily available or convenient to collect. A choice of small sample sizes, though sometimes necessary, can result in wide confidence intervals or risks of errors in statistical hypothesis testing.
• using a target variance for an estimate to be derived from the sample eventually obtained
• using a target for the power of a statistical test to be applied once the sample is collected.

How samples are collected is discussed in sampling (statistics) and survey data collection.

## Introduction

Larger sample sizes generally lead to increased precision when estimating unknown parameters. For example, if we wish to know the proportion of a certain species of fish that is infected with a pathogen, we would generally have a more accurate estimate of this proportion if we sampled and examined 200, rather than 100, fish. Several fundamental facts of mathematical statistics describe this phenomenon, including the law of large numbers and the central limit theorem.

In some situations, the increase in accuracy for larger sample sizes is minimal, or even non-existent. This can result from the presence of systematic errors or strong dependence in the data, or if the data follow a heavy-tailed distribution.

Sample sizes are judged based on the quality of the resulting estimates. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test. For example, if we are comparing the support for a certain political candidate among women with the support for that candidate among men, we may wish to have 80% power to detect a difference in the support levels of 0.04 units.

## Estimating proportions and means

A relatively simple situation is estimation of a proportion. For example, we may wish to estimate the proportion of residents in a community who are at least 65 years old.

The estimator of a proportion is $\hat p = X/n$, where $X$ is the number of 'positive' observations (e.g. the number of people out of the $n$ sampled people who are at least 65 years old). When the observations are independent, this estimator has a (scaled) binomial distribution (and is also the sample mean of data from a Bernoulli distribution). The maximum variance of this distribution is $0.25/n$, which occurs when the true parameter is $p = 0.5$. In practice, since $p$ is unknown, the maximum variance is often used for sample size assessments.
For sufficiently large $n$, the distribution of $\hat{p}$ will be closely approximated by a normal distribution with the same mean and variance.[1] Using this approximation, it can be shown that around 95% of this distribution's probability lies within 2 standard deviations of the mean. Because of this, an interval of the form $(\hat p -2\sqrt{0.25/n}, \hat p +2\sqrt{0.25/n})$ will form a 95% confidence interval for the true proportion. If this interval needs to be no more than $W$ units wide, the equation $4\sqrt{0.25/n} = W$ can be solved for $n$, yielding[2][3] $n = 4/W^2 = 1/B^2$ where $B$ is the error bound on the estimate, i.e., the estimate is usually given as within $\pm B$. So, for $B$ = 10% one requires $n$ = 100, for $B$ = 5% one needs $n$ = 400, for $B$ = 3% the requirement approximates to $n$ = 1000, while for $B$ = 1% a sample size of $n$ = 10000 is required. These numbers are quoted often in news reports of opinion polls and other sample surveys.

### Estimation of means

A proportion is a special case of a mean. When estimating the population mean using an independent and identically distributed (iid) sample of size $n$, where each data value has variance $\sigma^2$, the standard error of the sample mean is: $\sigma/\sqrt{n}.$ This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields an approximate 95% confidence interval of the form $(\bar x - 2\sigma/\sqrt{n},\bar x + 2\sigma/\sqrt{n}).$ If we wish to have a confidence interval that is $W$ units in width, we would solve $4\sigma/\sqrt{n} = W$ for $n$, yielding the sample size $n = 16\sigma^2/W^2$. For example, if we are interested in estimating the amount by which a drug lowers a subject's blood pressure with a confidence interval that is six units wide, and we know that the standard deviation of blood pressure in the population is 15, then the required sample size is 100.

## Required sample sizes for hypothesis tests

A common problem faced by statisticians is calculating the sample size required to yield a certain power for a test, given a predetermined Type I error rate $\alpha$. This can be estimated by pre-determined tables for certain values, by Mead's resource equation or, more generally, by the cumulative distribution function, as follows.

### By tables

| Power | Cohen's d = 0.2 | Cohen's d = 0.5 | Cohen's d = 0.8 |
|-------|-----------------|-----------------|-----------------|
| 0.25  | 84              | 14              | 6               |
| 0.50  | 193             | 32              | 13              |
| 0.60  | 246             | 40              | 16              |
| 0.70  | 310             | 50              | 20              |
| 0.80  | 393             | 64              | 26              |
| 0.90  | 526             | 85              | 34              |
| 0.95  | 651             | 105             | 42              |
| 0.99  | 920             | 148             | 58              |

The table above can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size; that is, the total number of individuals in the trial is twice that of the number given, and the desired significance level is 0.05.[4] The parameters used are:

• The desired statistical power of the trial, shown in the first column.
• Cohen's d (= effect size), which is the expected difference between the means of the target values between the experimental group and the control group, divided by the expected standard deviation.

### Mead's resource equation

Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments.
It may not be as accurate as other methods of estimating sample size, but it gives a hint of an appropriate sample size where parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate.[5]

All the parameters in the equation are in fact degrees of freedom of the corresponding counts, and hence each number is reduced by 1 before insertion into the equation. The equation is:[5] $E = N - B - T,$ where:

• $N$ is the total number of individuals or units in the study (minus 1)
• $B$ is the blocking component, representing environmental effects allowed for in the design (minus 1)
• $T$ is the treatment component, corresponding to the number of treatment groups (including the control group) being used, or the number of questions being asked (minus 1)
• $E$ is the degrees of freedom of the error component, and should be somewhere between 10 and 20.

For example, if a study using laboratory animals is planned with four treatment groups ($T=3$), with eight animals per group, making 32 animals total ($N=31$), without any further stratification ($B=0$), then $E$ would equal 28, which is above the cutoff of 20, indicating that the sample size may be a bit too large and six animals per group might be more appropriate.[6]

### By cumulative distribution function

Let $X_i$, $i = 1, 2, \ldots, n$, be independent observations taken from a normal distribution with unknown mean $\mu$ and known variance $\sigma^2$. Let us consider two hypotheses, a null hypothesis: $H_0:\mu=0$ and an alternative hypothesis: $H_a:\mu=\mu^*$ for some 'smallest significant difference' $\mu^* >0$. This is the smallest value for which we care about observing a difference. Now, if we wish to (1) reject $H_0$ with a probability of at least $1-\beta$ when $H_a$ is true (i.e. a power of $1-\beta$), and (2) reject $H_0$ with probability $\alpha$ when $H_0$ is true, then we need the following. If $z_{\alpha}$ is the upper $\alpha$ percentage point of the standard normal distribution, then $\Pr(\bar x >z_{\alpha}\sigma/\sqrt{n}\mid H_0 \text{ true})=\alpha$ and so 'Reject $H_0$ if our sample average ($\bar x$) is more than $z_{\alpha}\sigma/\sqrt{n}$' is a decision rule which satisfies (2). (Note that this is a one-tailed test.)

Now we wish for this to happen with a probability of at least $1-\beta$ when $H_a$ is true. In this case, our sample average will come from a normal distribution with mean $\mu^*$. Therefore we require $\Pr(\bar x >z_{\alpha}\sigma/\sqrt{n}\mid H_a \text{ true})\geq 1-\beta$. Through careful manipulation, this can be shown to happen when $$n \geq \left(\frac{z_{\alpha}-\Phi^{-1}(\beta)}{\mu^{*}/\sigma}\right)^2$$ where $\Phi$ is the standard normal cumulative distribution function. (Note that $\Phi^{-1}(\beta)=-z_{\beta}$ is negative, so the numerator is $z_{\alpha}+z_{\beta}$.)

## Stratified sample size

With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are $k$ such sub-samples (from $k$ different strata) then each of them will have a sample size $n_i$, $i = 1, 2, \ldots, k$. These $n_i$ must conform to the rule that $n_1 + n_2 + \cdots + n_k = n$ (i.e., the total sample size is given by the sum of the sub-sample sizes). Selecting these $n_i$ optimally can be done in various ways, using (for example) Neyman's optimal allocation.

There are many reasons to use stratified sampling:[7] to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually.
A useful, partly non-random method would be to sample individuals where easily accessible, but, where not, sample clusters to save travel costs.[citation needed]

In general, for $H$ strata, a weighted sample mean is $$\bar x_w = \sum_{h=1}^H W_h \bar x_h,$$ with $$\operatorname{Var}(\bar x_w) = \sum_{h=1}^H W_h^2 \,\operatorname{Var}(\bar x_h).$$[8] The weights, $W_h$, frequently, but not always, represent the proportions of the population elements in the strata, and $W_h=N_h/N$. For a fixed sample size, that is $n = \sum_h n_h$, $$\operatorname{Var}(\bar x_w) = \sum_{h=1}^H W_h^2 \,S_h^2 \left(\frac{1}{n_h} - \frac{1}{N_h}\right),$$[9] which can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: $n_h/N_h=k S_h$.

An "optimum allocation" is reached when the sampling rates within the strata are made directly proportional to the standard deviations within the strata and inversely proportional to the square roots of the costs per element within the strata: $$\frac{n_h}{N_h} = \frac{K S_h}{\sqrt{C_h}},$$[10] or, more generally, when $$n_h = \frac{K' W_h S_h}{\sqrt{C_h}}.$$[11]

## See also

• Degrees of freedom (statistics)
• Design of experiments
• Replication (statistics)
• Sampling (statistics)
• Statistical power
• Stratified sampling
• Engineering response surface example under Stepwise regression

## Notes

1. NIST/SEMATECH, "7.2.4.2. Sample sizes required", e-Handbook of Statistical Methods.
2. Chapter 13, page 215, in: Kenny, David A. (1987). Statistics for the Social and Behavioral Sciences. Boston: Little, Brown. ISBN 0-316-48915-8.
3. Kirkwood, James; Robert Hubrecht (2010). The UFAW Handbook on the Care and Management of Laboratory and Other Research Animals. Wiley-Blackwell. p. 29. ISBN 1-4051-7523-0.
4. Kish (1965, Section 3.1)
5. Kish (1965), p. 78.
6. Kish (1965), p. 81.
7. Kish (1965), p. 93.
8. Kish (1965), p. 94.

## References

• Bartlett, J. E., II, Kotrlik, J. W., & Higgins, C. (2001). "Organizational research: Determining appropriate sample size for survey research", Information Technology, Learning, and Performance Journal, 19(1), 43-50.
• Kish, L. (1965), Survey Sampling, Wiley. ISBN 0-471-48900-X

## Further reading

• NIST: Selecting Sample Sizes
• Raven Analytics: Sample Size Calculations
• ASTM E122-07: Standard Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
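As a quick numerical companion to the formulas quoted in this article, here is a short sketch (my own, not part of the original text; the helper names are invented) computing the three headline sample sizes:

```python
import math
from scipy.stats import norm

def n_for_proportion(width):
    """n = 4/W^2 for a 95% CI of total width W on a proportion (worst case p = 0.5)."""
    return math.ceil(4 / width**2)

def n_for_mean(sigma, width):
    """n = 16*sigma^2/W^2 for a 95% CI of width W on a mean."""
    return math.ceil(16 * sigma**2 / width**2)

def n_for_power(mu_star, sigma, alpha=0.05, beta=0.20):
    """One-tailed test: n >= ((z_alpha - Phi^{-1}(beta)) / (mu*/sigma))^2."""
    z_alpha = norm.ppf(1 - alpha)
    return math.ceil(((z_alpha - norm.ppf(beta)) / (mu_star / sigma)) ** 2)

print(n_for_proportion(0.06))   # CI narrower than 0.06 -> 1112
print(n_for_mean(15, 6))        # the blood-pressure example -> 100
print(n_for_power(0.5, 1.0))    # detect half a standard deviation at 80% power
```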
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 23, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8874846696853638, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/162088/a-linear-differential-equation?answertab=active
# A linear differential equation

Find the general solution to: $$x'' + 2c \; x' + \left( \frac{2}{\cosh^2 t} - 1\right)x =0$$ where $c$ is a constant.

- In order to get the best possible answers, it is helpful if you say what your thoughts on the problem are so far; this will prevent people from telling you things you already know, and help them write their answers at an appropriate level. Also, people are much happier to help those who demonstrate that they've tried the problem themselves first. – Zev Chonoles♦ Jun 23 '12 at 18:15

1 @ZevChonoles I know this problem, as well as a solution. I thought some math.SE users would find this problem interesting :| – qoqosz Jun 23 '12 at 18:16

1 The fact that you know a solution is definitely something that ought to be mentioned. Why not post it as an answer to your question, so that others don't waste their time explaining that solution to you? If someone knows a different method of solving the problem they can still post it. – Zev Chonoles♦ Jun 23 '12 at 18:18

3 There's no need to delete - and I apologize if I was coming across a bit harsh. I just feel that the origin of a problem is an important part of what should go into a post - whether it's homework, research, for fun, or anything else, one should explain where the problem is coming from. As long as that information is included, I'd love to keep this question open so that people can try their hand at it. – Zev Chonoles♦ Jun 23 '12 at 18:36

## 1 Answer

When $c=0$ we can guess that $x_1 = \frac{1}{\cosh t}$ is a solution. Then by inserting $x = x_1 \cdot u$ into the equation, it simplifies to: $$u'' - 2 \tanh t \;u' = 0$$ From which we conclude that: $x(t) = \frac{C_1}{\cosh t} + C_2 \left(\sinh t + \frac{t}{\cosh t} \right)$.

For the more general case when $c \neq 0$ we use the substitution $x = w\,e^{-ct}$. The equation transforms to: $$w'' = \left(1 + c^2 - \frac{2}{\cosh^2 t} \right)w \quad \quad (1)$$ The trick is to use Darboux's theorem. Let's say we have two functions $y$ and $z$ that are solutions to: $$y'' = p(t)y, \quad z'' = \left(p(t) - \lambda \right)z$$ then the function: $$\overline{y} = y' - \frac{yz'}{z}$$ satisfies: $$\overline{y}'' = \left( p - 2 \frac{d}{dt} \left(\frac{z'}{z} \right) \right)\overline{y} \quad\quad(2)$$ Taking $z=\cosh t$ and $p=1+c^2$, equation $(2)$ turns out to be $(1)$. We solve: $$y'' = (1+c^2) y \Rightarrow y = C_1 e^{\sqrt{1+c^2} t} + C_2 e^{-\sqrt{1+c^2} t}$$ then we plug in for $\overline{y} = w$ and $x$, which yields the final result: $$x = C_1 \left(\sqrt{1+c^2} - \tanh t \right)e^{\sqrt{1+c^2} t - ct} + C_2 \left(\sqrt{1+c^2} + \tanh t \right)e^{-\sqrt{1+c^2} t - ct}$$

- Amazing!! Thank you! – rubik Jun 25 '12 at 16:03
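The closed form can be machine-checked. A small SymPy sketch (mine, not from the thread) that substitutes one of the two fundamental solutions back into the ODE; the $C_2$ term is checked the same way with the opposite sign of the exponent:

```python
import sympy as sp

t = sp.symbols('t', real=True)
c = sp.symbols('c', real=True)
mu = sp.sqrt(1 + c**2)

# One fundamental solution from the answer above
x = (mu - sp.tanh(t)) * sp.exp(mu*t - c*t)

residual = sp.diff(x, t, 2) + 2*c*sp.diff(x, t) + (2/sp.cosh(t)**2 - 1)*x
print(sp.simplify(residual))  # should print 0 (possibly after rewriting in exponentials)
```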
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9657460451126099, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/138243/how-to-prove-coshx-ge-1-without-the-cosh2x-sinh2x-1-identity/138263
How to prove $\cosh(x) \ge 1$ without the $\cosh^2x-\sinh^2x=1$ identity

I think it's a pretty easy question, maybe even a dumb one, but still I can't find a nice way to solve it. How do you prove that $\forall x .\ \cosh(x) \ge 1$, without using the identity $\cosh^2x-\sinh^2x=1$ and without using derivatives? The definition of $\cosh(x)$ is $\frac{e^x+e^{-x} }{2}$. I already proved it using the identity, but I'm just wondering if there is another way. Sadly, I couldn't find a proof using the search engine, even though this is a very basic question. Would appreciate your help!

- 2 What is the smallest x + 1/x can get? Definition of cosh? – Chris K. Caldwell Apr 28 '12 at 22:42

How you prove this (and, really, whether you have proved this) depends sensitively on how you define $\cosh$ and what you assume known about the things you have used to define $\cosh$. What is your definition of $\cosh$, and what do you assume known? – leslie townes Apr 28 '12 at 22:43

updated the question with the definition. – RB14 Apr 28 '12 at 22:50

3 Answers

You have by definition that $$\cosh x=\frac12(e^x+e^{-x})\;,$$ so it suffices to show that $e^x+e^{-x}\ge 2$. But this is an easy consequence of the fact that for any positive real number $u$, $u+u^{-1}\ge 2$. To see this, note that the desired inequality is equivalent to $$\frac{u^2+1}u\ge 2$$ and hence to $u^2+1\ge 2u$, or $u^2-2u+1\ge 0$. But this is clearly true, since $$u^2-2u+1=(u-1)^2\;.$$ Reorganizing in logical order: for $u> 0$, $$\begin{align*} u^2-2u+1=(u-1)^2\ge 0&\implies u^2+1\ge 2u\\ &\implies\frac{u^2+1}u\ge 2\\ &\implies u+\frac1u\ge 2\;, \end{align*}$$ and in particular this holds when $u=e^x$ for any real $x$, since $e^x>0$ for all $x$.

- lovely, I knew it must be an easy way! thanks a lot! – RB14 Apr 28 '12 at 22:54

@RB14 If you know the AM-GM inequality, your inequality $u + u^{-1} \geq 2$ comes almost immediately from it. – BenjaLim Apr 29 '12 at 0:04

actually, on second reading, you did make a small mistake here. You said that the inequality $u+u^{-1}\ge 2$ is true for any non-zero real number $u$, but actually it's not true for negative numbers, so it's true only for real numbers greater than 0. But it's still good for proving the expression, since $e^x > 0$. – RB14 Apr 29 '12 at 14:08

@RB14: Yes, that was a slip. Thanks. It's fixed now. – Brian M. Scott Apr 29 '12 at 14:19

Since $e^x=1+x+x^2/2!+x^3/3! +x^4/4! +\ldots$ we also have $$\cosh(x)={(e^x+e^{-x})\over 2}=1 + x^2/2! + x^4/4! + \ldots \ge 1$$ for all real $x$, just by adding the series for $x$ and for $-x$ termwise, where the terms at odd exponents of $x$ cancel because of the alternating sign.

- 1 Even easier: one can start with the basic inequality $\exp\,x \geq 1+x$. – J. M. Apr 29 '12 at 0:57

By definition, $\cosh x=\frac{e^x+e^{-x}}{2}$. Taking the derivative, we get $\frac{e^x-e^{-x}}{2}$, which is $0$ when $x=0$, positive when $x>0$ and negative when $x<0$. So $x=0$ is the function's minimum, and $\cosh 0=1$.

- Thanks for your answer, but I forgot to mention that I prefer not to use derivatives, but that's a nice answer! – RB14 Apr 28 '12 at 22:44
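For readers who want to sanity-check the series argument above, a two-line SymPy sketch (my addition, assuming a reasonably recent SymPy where `sp.minimum` is available):

```python
import sympy as sp

x = sp.symbols('x', real=True)
print(sp.series(sp.cosh(x), x, 0, 6))  # 1 + x**2/2 + x**4/24 + O(x**6)
print(sp.minimum(sp.cosh(x), x))       # 1, attained at x = 0
```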
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496774673461914, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/3382/stock-price-behavior-and-garch/3399
# Stock Price Behavior and GARCH

In my (limited) understanding, the behavior of a stock price can be modeled using Geometric Brownian Motion (GBM). According to the Hull book I'm currently reading, the discrete-time version of this model is as follows: $$\Delta S = \mu S \Delta t + \sigma S \varepsilon \sqrt{\Delta t}, \quad \varepsilon \sim N(0,1).$$ If I'm performing a Monte Carlo simulation, could I use the term structure of a GARCH model to plug in a unique value for the volatility at each time step instead of using a constant value? Is this a correct use of the GARCH model?

- Generally, yes. But I'm not sure about your formulation of the geometric Brownian motion; I'd say your formula describes a standard Brownian motion. The formula I know and use is something along the lines of $\Delta S= S\cdot e^{\mu \Delta t + \sigma \epsilon \sqrt{\Delta t}}$. – Owe Jessen Apr 30 '12 at 8:16

1 miggety: "at each time step": Is that for each path and each time step, so maybe 100,000 GARCH recalculations per time step? Does the GARCH result depend that much on 1 additional time step? I mean you add 1 data point each time step. @Owe Jessen miggety's GBM formula is the Euler-Maruyama discretisation of the SDE, which yields an analytic solution like the one you are using, Owe. See GBM as well. – Konsta Apr 30 '12 at 10:53

1 @SRKX - I don't think this community would benefit from a very narrow definition of quant. – Owe Jessen Apr 30 '12 at 13:36

@OweJessen: this meta post discussed the point you're making. Feel free to participate in Quantitative Finance Meta. As for this post, I believe it's borderline to what we can accept, so we can just wait and see how the community and the mods react to that; there is no need to do anything else. – SRKX♦ Apr 30 '12 at 14:23

## 4 Answers

I'm guessing, and correct me if I'm wrong, you want to create a number of possible paths the stock price could follow, with the local volatility given by GARCH depending on the simulated history, or in pseudocode:

```
N <- numberOfPaths
T <- numberOfSteps
for (i in 1:N) {
    newSeries <- pastPrices
    for (t in 1:T) {
        epsilon <- normrnd(0,1)
        sigma <- calculateGARCHVol(newSeries)  # volatility from the simulated history so far
        newSeries.append(nextPrice(epsilon, sigma))
    }
    allSeries.append(newSeries)
}
```

Yes, you can do this and this is correct usage of GARCH.

- Bob, your assumption is correct. That is exactly what I was asking! Thanks! – miggety Apr 30 '12 at 14:22

Yeah but then you're not using a GBM anymore... it's just not the same. – SRKX♦ Apr 30 '12 at 14:29

Okay, just to wind things down here, I think an important clarification is needed in case readers come seeking a similar solution. The Geometric Brownian Motion (GBM) is a model of asset price dynamics which is usually given as follows: $$dS_t = \mu S_t dt + \sigma S_t dB_t$$ where $B_t$ is a standard Brownian motion, which has several important characteristics, mainly that it has stationary increments and $B_t \sim N(0,t)$. It is very famous because it is pretty simple to find a closed form solution for $S_t$ and because it was used in the derivation of the Black-Scholes formula.
What you state is a discretization of the GBM: $$S_{t+\Delta t}-S_t= \Delta S_t = \mu S_t \Delta t + \sigma S_t \epsilon \sqrt{\Delta t}$$ You just used the property of the Brownian motion (which is a random walk) which says that $B_{t + \Delta t} - B_t \overset{d}{=} B_{\Delta t} \sim N(0,\Delta t)$ and created a similar variable $\underbrace{\epsilon}_{N(0,1)} \sqrt{\Delta t} \sim N(0,\Delta t)$.

In this model, you assume volatility is a constant $\sigma$. Now GARCH(p,q) is different: the model tries to find a way to express the return at time $t$ given the previous returns. It assumes that each new return is $$r_t=a_0 + \sum_{i=1}^p a_i r_{t-i} + z_t$$ where $z_t = \sigma_t \epsilon_t \sim N(0,\sigma_t^2)$. The local volatility is assumed in the model to be $$\sigma_t^2 = \alpha_0 + \sum_{i=1}^q \alpha_i z_{t-i}^2 + \sum_{j=1}^p \beta_j \sigma_{t-j}^2$$

So what you do is that you choose the two orders $p$ and $q$, you fit the model to your historical data to get the parameters $a, \alpha, \beta$ (using a maximum likelihood estimator), and hence you can compute your estimated past $\sigma_i$ for $i \leq t$. You can then simulate different paths of the stock in the future by using the equations of the model to generate the future $\sigma_t$ and $r_t$, and each generation will differ because you need to generate an $\epsilon_t$ at each step. This is useful for Monte-Carlo simulation of options, for example, when you do not want to use the constant volatility of the discretized GBM.

-

I am going to supply an answer that is quite similar to SRKX's (which is very very good) because I want to discuss in more detail a few important things. First, you cannot use a stochastic volatility model for the SDE that you've provided, as that's GBM with constant diffusion. However, based on what you've said it's obvious you wish to model a discretized version of the following process: $$\boxed{dS(t) = S(t)(\mu dt + \sigma(t)dW(t))}$$ where $W(t)$ is a standard scalar Brownian motion under the real-world probability measure, $\mu$ is its constant drift and $\sigma(t)$ is its volatility process, which we assume to follow GARCH dynamics. Doing what we always do most of the time in finance research, I'm going to define returns as $R(t):= \ln{\frac{S(t)}{S(t-1)}}$. In the univariate setting, the mean equation of an ARCH/GARCH type model generally follows: $$\boxed{R(t) = \mu + \sum_{i=1}^N a_i R(t-i) + b \sigma(t) + \mathbf{c}\mathbf{X(t)} + \sigma(t) (W(t)-W(t-1))}$$ where $a_i$ are the AR(i) coefficients, $b$ is the coefficient of the GARCH-in-mean, and $\mathbf{c}$ is a row vector of coefficients to some exogenous variables represented by the column vector $\mathbf{X(t)}$. However, given the process that you've specified, we are restricted to a very specific mean equation: $$\boxed{R(t) = \mu + \sigma(t)(W(t)-W(t-1)) }$$ where $W(t)-W(t-1) \overset{d}{=} W(1) \sim N(0,1)$. This restriction is not automatically troubling, as it is prevalent when modelling DCC-fGARCH (time-varying correlations with family GARCH). However, most studies that look to things such as bivariate variance spillovers (e.g., contagion research), or that are doing a study where univariate GARCH is involved (e.g., exchange rate exposure research where market returns and exchange rate returns are exogenous variables in the mean equation), will find this process to be inappropriate for their usage.
Ignoring the empirical issues with the mean equation that follow from the proposed SDE, we can see that the mean equation representation follows from the fact that, at $\mathcal{F}_{0}$ and under the real-world measure $\mathbb{P}$:

$S(t) = e^{\mu t + \sigma(t) W(t)}$

$\therefore \ln{\frac{S(t)}{S(t-1)}} = \mu + \sigma(t) (W(t) - W(t-1)) \overset{d}{=} \mu + \sigma(t) W(1)$

Luckily, the variance equation is unconstrained, and we can use the GARCH model whose process is defined here. For a discretized econometric representation of GARCH(1,1) we have, as SRKX lays out: $$\boxed{\sigma_t^2 = \omega + \alpha (W(t-1)-W(t-2))^2\sigma(t-1)^2 + \beta \sigma(t-1)^2}$$ So you should then proceed with your plan of discretization while using the correct mean equation that I boxed. This is the default in the R packages `ccgarch`, `rugarch` and `fGarch`, so it's your lucky day!

-

For the univariate case, consider X, the log prices of some stock. First, fit X with an AR(p) model and collect the residuals. Next, fit a GARCH(p,q) model and collect the conditional standard deviations. Scale the initial residuals by the conditional standard deviations to produce a new series that has mean 0 and variance 1. For the sake of simplicity, assume this is normally distributed. For the simulation, a generic step would look like:

1) simulate from N(0,1) and collect that in a vector,
2) create a vector that would be the result of using the GARCH model above to find the conditional standard deviation in each simulation,
3) Hadamard product the N(0,1) vector by the new vector of conditional standard deviations,
4) add that to what you would get from the conditional mean from the AR(p) equation,
5) take the exp() of the vector to convert back to stock prices.

If you do this each period and collect the results into one matrix, then you will have the distribution of the prices over time.

-
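Pulling the answers together, here is a minimal runnable sketch of the simulation loop (my own illustration, not code from any answerer; the parameter values are invented, and in practice $\omega, \alpha, \beta$ and $\mu$ would come from a maximum-likelihood fit to historical returns):

```python
import numpy as np

def simulate_garch_paths(s0, mu, omega, alpha, beta, n_paths, n_steps, seed=0):
    """Price paths whose per-period log-return is mu + sigma_t * eps_t,
    with sigma_t^2 following a GARCH(1,1) recursion (all quantities per period)."""
    rng = np.random.default_rng(seed)
    paths = np.full((n_paths, n_steps + 1), float(s0))
    var = np.full(n_paths, omega / (1.0 - alpha - beta))  # start at the unconditional variance
    z = np.zeros(n_paths)                                 # previous shock sigma_{t-1} * eps_{t-1}
    for t in range(1, n_steps + 1):
        var = omega + alpha * z**2 + beta * var           # GARCH(1,1): sigma_t^2
        sigma = np.sqrt(var)
        eps = rng.standard_normal(n_paths)
        z = sigma * eps
        paths[:, t] = paths[:, t - 1] * np.exp(mu + z)    # log-return step
    return paths

# e.g. 10,000 one-year daily paths with made-up daily parameters
paths = simulate_garch_paths(s0=100.0, mu=0.0002, omega=1e-6,
                             alpha=0.09, beta=0.90, n_paths=10_000, n_steps=252)
print(paths[:, -1].mean(), paths[:, -1].std())
```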
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336698651313782, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=c3532470767de9361c8fcded6eceaf91&p=4152334
Physics Forums

## Electric field of a line charge with the divergence theorem

Hi, on page 63 of David J. Griffiths' "Introduction to Electrodynamics" he calculates the electric field at a point z above a line charge (with a finite length L) using the electric field in integral form: $E_z = \frac{1}{4 \pi \epsilon_0} \int_{0}^{L} \frac{2 \lambda z}{\sqrt{(z^2 + x^2)^3}} dx = \frac{1}{4 \pi \epsilon_0} \frac{2 \lambda L}{z \sqrt{z^2 + L^2}}$ where $\lambda = \frac{Q}{L}$.

Basically it's a two-dimensional system with a horizontal x-axis and a vertical z-axis; the charges go from -L to L on the x-axis and we look at the electric field a distance z above the line (i.e. on the z-axis). That's all fine and dandy, but I have serious trouble trying to reproduce that same result with the corresponding Maxwell equation in differential form and the divergence theorem. This is what I got so far: $$\vec{\nabla} \vec{E} = \frac{\rho}{\epsilon_0} \\ \int \vec{\nabla} \vec{E}\, dV = \int \frac{\rho}{\epsilon_0} dV \\ \int \vec{E}\, d\vec{A} = \int \frac{\lambda}{\epsilon_0} dl$$ where I used $\rho\, dV \propto \lambda\, dl$.

For the left side I use cylindrical coordinates and get: $$\vec{E} \hat{x} = \frac{1}{2 \pi \epsilon_0 x z} \int {\lambda}\, dl.$$ Since λ is constant I can pull it out of the integral, and when I integrate I get as a final result: $\vec{E} \hat{x} = \frac{2 \lambda L}{4 \pi \epsilon_0 x z}$.

Now this is a completely different result than what I get when I use the formula for the electric field in the integral form. Part of the problem is that the integral form actually has the vector difference between the position of the charge and the point at which you want to calculate the electric field, i.e. $\int \lambda \frac{\vec{r}-\vec{r'}}{\| \vec{r} - \vec{r'}\|^3} dl$, whereas this is not the case in the Maxwell equation. What am I doing wrong? Why can't I reproduce the same result?

Quote by dipole knight: For the left side I use cylindrical coordinates and get: $\vec{E} \hat{x} = \frac{1}{2 \pi \epsilon_0 x z} \int {\lambda} dl$.

How? Your system has no symmetry which could be used in the integral.

Quote by mfb: How? Your system has no symmetry which could be used in the integral.

I am not sure I understand what you mean. Though I think I have made a mistake. I wanted to integrate using the surface area for cylindrical coordinates, i.e.: $\int \vec{E} \; d\vec{A} = \int \vec{E} \; \hat{r} \; dr \; d\phi \; dx = \vec{E} \; \vec{r} \; \ln(r) \; 2 \pi \cdot 2 L$ where r is the radius of the Gaussian cylinder I have put around the line charge. Although I just realised that this is actually the integral for the volume of a cylinder. I am not really sure how to integrate over the surface, but I don't think that will solve the problem. I still don't understand why it's not possible to use the divergence theorem to solve this problem. :/

It's not that you can't use the divergence theorem, as it holds no matter the case. You cannot use cylindrical symmetry to simplify the system because the E field in the system will not be cylindrically symmetric. To prove this to yourself, take the shape of the field when z>>L.
Quote by dipole knight: I wanted to integrate using the surface area for cylindrical coordinates, i.e.: $\int \vec{E} \; d\vec{A} = \int \vec{E} \; \hat{r} \; dr \; d\phi \; dx = \vec{E} \; \vec{r} \; \ln(r) \; 2 \pi \cdot 2 L$ where r is the radius of the Gaussian cylinder I have put around the line charge. Although I just realised that this is actually the integral for the volume of a cylinder. I am not really sure how to integrate over the surface, but I don't think that will solve the problem.

Even as a volume integral, this is incorrect, as $\hat{r}$ is a unit vector, and I don't know where you are getting the $\ln(r)$ from. The area element $d\vec{A} = \vec{r} \; d\phi\, dx$ is proportional to and in the direction of $\vec{r}$. To get the surface area, the area element is integrated at constant radius $r=R$. However, $\vec{E} \cdot \hat{r}$ would be a function of $r$ and $x$, so it cannot be pulled out of the surface integral.

Quote by Jasso: It's not that you can't use the divergence theorem, as it holds no matter the case. You cannot use cylindrical symmetry to simplify the system because the E field in the system will not be cylindrically symmetric. To prove this to yourself, take the shape of the field when z>>L. Even as a volume integral, this is incorrect, as $\hat{r}$ is a unit vector, and I don't know where you are getting the $\ln(r)$ from. The area element $d\vec{A} = \vec{r} \; d\phi\, dx$ is proportional to and in the direction of $\vec{r}$. To get the surface area, the area element is integrated at constant radius $r=R$. However, $\vec{E} \cdot \hat{r}$ would be a function of $r$ and $x$, so it cannot be pulled out of the surface integral.

Yes, thank you! I actually got the $\ln(r)$ from the unit vector, since $\hat{r} = \frac{\vec{r}}{r}$, and when integrating $1/r$ you get $\ln(r)$. You are right though, my calculations are a mess, too many mistakes. Sorry about that. :/ At least I now know that this configuration does not permit cylindrical symmetry because of both ends of the cylinder, which seem to be the trouble makers. Something like that would work for an infinitely long cylinder, though.
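As a quick numerical cross-check of Griffiths' closed form quoted at the top of the thread, here is a small sketch (my own, with arbitrary test values for $z$, $L$ and $\lambda$) comparing the quadrature of the integral form against the closed-form expression:

```python
import numpy as np
from scipy.integrate import quad

z, L, lam, eps0 = 0.7, 2.0, 1.3, 8.8541878128e-12  # arbitrary test values (SI units)

# Integrand of E_z for the finite line charge (Griffiths' integral form)
integrand = lambda x: 2 * lam * z / (z**2 + x**2) ** 1.5
numeric, _ = quad(integrand, 0.0, L)
numeric /= 4 * np.pi * eps0

closed_form = (1 / (4 * np.pi * eps0)) * 2 * lam * L / (z * np.sqrt(z**2 + L**2))
print(numeric, closed_form)  # agree to quadrature precision
```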
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231138229370117, "perplexity_flag": "head"}
http://mathoverflow.net/questions/122870/about-the-boundedness-of-a-multiplication-operator/122885
## About the boundedness of a multiplication operator

Let $f$ be a $2\pi$-periodic function and $\hat{f}(k)=\frac{1}{2\pi}\int_0^{2\pi}f(x)e^{-ikx}dx$. Consider the operator: \begin{equation} Tf(x)=\sum_{k\in\mathbb{Z}}\operatorname{sign}(k)\ \hat{f}(k)\ e^{ikx}. \end{equation} I would like to know if the operator $T:L^p(0,2\pi)\rightarrow L^p(0,2\pi)$ is bounded.

- 1 This is just (a scalar multiple of) the Hilbert transform on the circle. It maps $L^p$ to itself for $1<p<\infty$. See: en.wikipedia.org/wiki/Hilbert_transform – Mark Lewko Feb 25 at 11:09

1 Multiplication with a bounded sequence always gives a bounded operator: en.wikipedia.org/wiki/Multiplication_operator – András Bátkai Feb 25 at 11:14

1 @András, note the operator acts by multiplying the Fourier transform by a bounded sequence (not the function itself!) – Mark Lewko Feb 25 at 11:29

@Mark: Thank you. An amateurish mistake... – András Bátkai Feb 27 at 7:51

## 1 Answer

When $p=2$, boundedness is a triviality, and it is the only trivial case. It is not true for $p=1$ nor for $p=\infty$, although the Fourier multiplier $\operatorname{sign}(D_x)$ sends $L^1$ into $L^1_w$, and the Marcinkiewicz interpolation theorem then implies boundedness in $L^p$ for all $p\in]1,+\infty[$. The operator $\operatorname{sign}(D_x)$ is a particular case of the wider class of singular integrals, extensively studied by Calderón and Zygmund, and later by Hörmander, Stein & Fefferman. They are defined via a simple condition on their kernels, are easily proven $L^2$ bounded, and have the property that they send $L^1$ into $L^1_w$. Again the Marcinkiewicz theorem allows one to finish the job of proving boundedness in $L^p$ for all $p\in]1,+\infty[$. A simple class of examples is given by Fourier multipliers $F(D_x)$ where $F$ is a homogeneous function of degree 0 which is smooth outside of the origin. Note that this works as well in any dimension and that the so-called Hörmander-Mihlin multiplier theorem allows one to weaken the smoothness assumption significantly.

-
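For intuition, the multiplier is easy to play with numerically. A small sketch (mine, not from the thread): applying $\operatorname{sign}(k)$ on the FFT side sends $\cos kx$ to $i\sin kx$, i.e. $T = iH$ with $H$ the periodic Hilbert transform (conjugate function), which is the operator Mark Lewko's comment identifies:

```python
import numpy as np

N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.cos(3 * x) + 0.5 * np.sin(7 * x)

k = np.fft.fftfreq(N, d=1.0 / N)            # integer frequencies ..., -1, 0, 1, ...
Tf = np.fft.ifft(np.sign(k) * np.fft.fft(f))

# T(cos 3x) = i sin 3x and T(sin 7x) = -i cos 7x, so Tf = i*(sin 3x - 0.5 cos 7x)
print(np.allclose(Tf, 1j * (np.sin(3 * x) - 0.5 * np.cos(7 * x))))  # True
```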
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8613488674163818, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/195815-probability-prizes.html
# Thread: Probability for prizes!

1. ## Probability for prizes!

Hi all,

Could someone help me with a problem please? I have 32 rounds and in each round there are 6 possible outcomes. Could someone please give the basic working of the probability of getting, e.g., 5 out of 32 correct, or 10 out of 32 correct. Or even 5-9 out of 32, or 10-14 out of 32.

Thanks
Lee

2. ## Re: Probability for prizes!

If only one of the 6 is correct, you choose randomly, and the outcome of each round is independent of the other rounds, then you are describing a case of the binomial distribution, with the following:

probability of success (p) = 1/6
probability of failure (q) = 5/6
number of trials (n) = 32

The chance of getting exactly $x$ guesses right is given by the formula $^nC_x \times p^x \times q^{n-x}$

If you want to work out the chance of getting a range of values (5-10), then add up the chances of getting 5, 6, 7, 8, 9 and 10. There are online tools that calculate these for you: Binomial Calculator
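For the specific ranges asked about, a short SciPy check (a sketch of mine; the numbers are what the binomial formula above gives under the 1-in-6 assumption):

```python
from scipy.stats import binom

n, p = 32, 1 / 6
print(binom.pmf(5, n, p))                       # P(exactly 5 of 32 correct) ≈ 0.19
print(binom.cdf(9, n, p) - binom.cdf(4, n, p))  # P(5 to 9 correct)
print(binom.cdf(14, n, p) - binom.cdf(9, n, p)) # P(10 to 14 correct)
```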
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.902250349521637, "perplexity_flag": "middle"}
http://dsp.stackexchange.com/questions/270/what-is-the-relation-between-the-psds-of-filter-input-and-output-called-r-y
# What is the relation between the PSDs of filter input and output called? $R_Y = |H|^2R_X$

If a wide-sense stationary signal X is fed to an LTI filter with the transfer function H, the power spectral density (PSD) of the output Y can be expressed as: $$R_Y(f) = |H(f)|^2R_X(f)$$ where $R_X$ denotes the PSD of X. Does this relation have a common name?

-

## 2 Answers

I don't know the name of the relationship, but $\vert H(f)\vert^2$ is called the power transfer function of the LTI system. The output power spectrum is the input power spectrum multiplied by the power transfer function, just as for deterministic signals the output spectrum is the input spectrum multiplied by the transfer function $H(f)$.

- To be extra pedantic, H(f) is the frequency response function. H(w) is the transfer function. – mtrw Sep 20 '11 at 22:44

3 @mtrw Do you have a citation to back up your pedantry? Bracewell's classical text The Fourier Transform and Its Applications calls $H(f)$ the transfer function; other texts call $H(\omega)$ or $H(j\omega)$ the transfer function as you do; yet others call $H(s)$ the transfer function. So, please provide a citation that says that calling $H(f)$ the transfer function is wrong as this name is reserved for $H(\omega)$. – Dilip Sarwate Sep 21 '11 at 1:18

1 First, I have to apologize for a stupid mistake. I should have said H(f) is the FRF, and H(s) is the transfer function. Sadly I don't have my copy of Oppenheim, Schafer & Young's Signals and Systems anymore, which is where I remember learning this. The mnemonic I was taught was that the Fourier transform of the impulse response (either H(f) or H(jw)), since it's evaluated for pure sinusoids, gives the response to frequencies. The Laplace and z transforms (H(s) or H(z)) give transfer functions. – mtrw Sep 21 '11 at 6:23

The relation that you have results from the Wiener-Khinchin theorem (WK). The WK theorem primarily relates the autocorrelation of the input and its power spectral density (PSD) as a Fourier transform pair. I have not heard it referred to by any particular name other than explicitly saying "From the WK theorem, we have blah..." From the article cited: A corollary [of the WK theorem] is that the Fourier transform of the autocorrelation function of the output of an LTI system is equal to the product of the Fourier transform of the autocorrelation function of the input of the system times the squared magnitude of the Fourier transform of the system impulse response. While it was written and proven for signals (or functions) that are square integrable, and hence have a Fourier transform, it is commonly used to study WSS random processes (which do not have a Fourier transform) by relating the autocorrelation via expectations rather than integrals.

- 2 It is a fine response, but you really don't answer the question? I get the impression that your answer is the Wiener-Khinchin theorem, but that is not really true I think. I hope I don't appear grumpy, but the question is really precise so the answer should/could be precise. – niaren Sep 20 '11 at 18:47

1 I disagree with Wikipedia that the result in question is a corollary of the WK theorem. The WK theorem says that the PSD of a WSS process is the Fourier transform of its autocorrelation function. It is a different result entirely that when a WSS process passes through a linear system, the output autocorrelation function is related to the input autocorrelation as $A_Y = h * \tilde{h} * A_X$. This result requires probabilistic analysis and taking expectations etc.
that are related to the calculations used to prove the WK theorem, but the result is not a corollary of the WK theorem – Dilip Sarwate Oct 31 '11 at 11:20 1 Continuing my previous comment, once the probabilistic analysis has established that $A_Y = h * \tilde{h} * A_X$, we can apply the WK theorem and say $A_X(t) \leftrightarrow R_X(f)$ and $A_Y(t) \leftrightarrow R_Y(f)$ via WK, while $h(t) \leftrightarrow H(f)$ and $\tilde{h}(t) \leftrightarrow H^*(f)$ and so $$R_Y(f) = |H(f)|^2 R_X(f)$$ via the convolution theorem which is what the OP was asking about. But all this is inapplicable unless you first show that $A_Y = h * \tilde{h} * A_X$, and this is not a corollary of the WK theorem. – Dilip Sarwate Oct 31 '11 at 11:27 @Dilip I don't disagree with that, and I never make the claim that the result for WSS is a corollary of WK. The text I quoted just talks about the relationship between the autocorrelation and Fourier transforms for inputs and outputs of an LTI system. It does not talk about WSS. I clarified just beneath it, that while WK was proven for square integrable signals, it is used to study WSS using a probabilistic approach and relating the autocorrelation via expectations. It's pretty much what you've said here, but I didn't go into any detail, because the OP never asked for it. – Lorem Ipsum Oct 31 '11 at 13:51 @yoda Please note that I said I was disagreeing with what Wikipedia claims, not what you said. The Wikipedia article states that the WK theorem holds for WSS processes, and then claims that the result $R_Y(f)=|H(f)|^2R_X(f)$ is a corollary of the WK theorem which is not correct. The result $A_Y=h*\tilde{h}*A_X$ can be proved for deterministic (square-integrable) signals very straightforwardly and then $R_Y(f)=|H(f)|^2R_X(f)$ follows via the convolution theorem. For WSS processes, establishing $A_Y=h*\tilde{h}*A_X$ requires probabilistic analysis (as you correctly say). More below – Dilip Sarwate Oct 31 '11 at 14:09
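Since the relation $R_Y(f) = |H(f)|^2 R_X(f)$ is easy to check numerically, here is a minimal sketch assuming SciPy and NumPy: white noise (flat $R_X$) is passed through an arbitrary FIR filter, and the ratio of the Welch PSD estimates is compared against $|H(f)|^2$. The filter, signal length, and segment size are arbitrary choices, not anything prescribed by the thread.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(2**19)          # white WSS input: R_X(f) is flat

b = signal.firwin(64, 0.3)              # some arbitrary LTI filter h[n]
y = signal.lfilter(b, 1.0, x)           # output process

f, Rx = signal.welch(x, nperseg=2048)   # estimated input PSD (fs = 1)
_, Ry = signal.welch(y, nperseg=2048)   # estimated output PSD

_, H = signal.freqz(b, 1.0, worN=f, fs=1.0)   # H(f) on the same grid

# Up to estimation noise, Ry/Rx should track the power transfer function.
print("max deviation from |H(f)|^2:", np.max(np.abs(Ry / Rx - np.abs(H)**2)))
```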
http://math.stackexchange.com/questions/198269/vector-valued-function
# Vector valued function

How can I determine if a function lies within the span of a vector valued function? For example, does $\begin{bmatrix} 2\\ 1\end{bmatrix}$ lie in the span of: $$a(x) = \begin{bmatrix} 1\\ x\end{bmatrix}$$ $$b(x) = \begin{bmatrix} x\\ 1\end{bmatrix}$$ $$c(x) = \begin{bmatrix} x\\ 2x\end{bmatrix}$$ -

## 2 Answers

You could approach this problem in an unorganized brute force sort of way like: Suppose $$[2\;1]^T = r[1\;x]^T+s[x\;1]^T+t[x\;2x]^T.$$ This leads to the equations $2=r+sx+tx$ and $1=rx+s+2tx$. Thus $2=r+(s+t)x$ and $1=s+(r+2t)x$. So (equating coefficients) one has $2=r$, $0=s+t$, $1=s$, and $0=r+2t$. Therefore, $r=2$, $s=1$, and $t=-1$. Thus "Yes" $[2\;1]^T=2a(x)+b(x)-c(x)$. Instead a coordinate approach is a bit more organized (although a little clunkier). Such an approach would be more successful (than brute force) if dealing with a larger problem. The collection of all functions $\{f:\mathbb{R}\to\mathbb{R}^{2\times 1}\}$ forms an infinite-dimensional vector space. But your functions lie in a finite-dimensional subspace, for example: $W = \left\{ \begin{bmatrix} ax+b\\cx+d \end{bmatrix} \;:\; a,b,c,d \in \mathbb{R} \right\}$. By choosing a basis like $$\beta=\left\{\begin{bmatrix} 1\\0 \end{bmatrix},\begin{bmatrix} x\\0 \end{bmatrix},\begin{bmatrix} 0\\1 \end{bmatrix},\begin{bmatrix} 0\\x \end{bmatrix}\right\}$$ we can write everything in coordinates. So $$\left[ \begin{bmatrix} 2\\1 \end{bmatrix} \right]_\beta = \begin{bmatrix} 2 \\ 0 \\ 1 \\ 0 \end{bmatrix} \quad \left[ \begin{bmatrix} 1\\x \end{bmatrix} \right]_\beta = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \quad \left[ \begin{bmatrix} x\\1 \end{bmatrix} \right]_\beta = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} \quad \left[ \begin{bmatrix} x\\2x \end{bmatrix} \right]_\beta = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 2 \end{bmatrix}$$ Then asking if $[2\;1]^T$ is in the span of $\{a(x),b(x),c(x)\}$ amounts to asking if the final column of the following matrix lies in the span of the first three columns (i.e. is the final column a non-pivot column?): $$\begin{bmatrix} 1 & 0 & 0 & : & 2 \\ 0 & 1 & 1 & : & 0 \\ 0 & 1 & 0 & : & 1 \\ 1 & 0 & 2 & : & 0 \end{bmatrix}$$ This matrix row reduces to $$\begin{bmatrix} 1 & 0 & 0 & : & 2 \\ 0 & 1 & 0 & : & 1 \\ 0 & 0 & 1 & : & -1 \\ 0 & 0 & 0 & : & 0 \end{bmatrix}$$ So the final column is not a pivot column and thus $[2\;1]^T$ lies in the span of $\{a(x),b(x),c(x)\}$. Moreover, the RREF tells us that the desired linear combination is $2a(x)+b(x)-c(x)$ (as before). - Wow excellent, thank you very much!! – diimension Sep 19 '12 at 6:37 I'm glad it helped. :) – Bill Cook Sep 19 '12 at 10:55

Does the point $(2,1)$ fall on the line parametrized by $a(x),b(x)$ or $c(x)$, if any? $$a(x) = (2,1)=(1,x) \ \ \Rightarrow \ \ 2=1, 1=x$$ $$b(x) = (2,1)=(x,1) \ \ \Rightarrow \ \ 2=x, 1=1$$ $$c(x) = (2,1)=(x,2x) \ \ \Rightarrow \ \ 2=x, 1=2x$$ Obviously $2=1$ is hard to solve, the equation for $b(x)$ shows $b(2)=(2,1)$ and the equation for $c(x)$ gives $x=2$ and $x=1/2$ which is a contradiction. Only $b$ contains $(2,1)$ in its image. Following Bill's suggestion: you could find $c_1,c_2,c_3$ such that $$(2,1) = c_1(1,x)+c_2(x,1)+c_3(x,2x)$$ This equation has to hold for all $x$ if $(2,1)$ is in the span of $a,b,c$ as vector-valued functions. - 1 I think he is asking how to tell if the constant function $f(x)=[2\;1]^T$ is in the span of $\{a(x),b(x),c(x)\}$. – Bill Cook Sep 18 '12 at 1:32 well, that would be more interesting. – James S.
Cook Sep 18 '12 at 1:39 Lol Bill is right but no worries. Thanks anyways! – diimension Sep 19 '12 at 6:38
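The row reduction in Bill Cook's answer is easy to reproduce by machine. Here is a small SymPy sketch; the matrix and basis ordering are exactly the ones chosen in that answer.

```python
from sympy import Matrix

# Columns: [a(x)]_beta, [b(x)]_beta, [c(x)]_beta, and the target [2,1]^T.
M = Matrix([[1, 0, 0, 2],
            [0, 1, 1, 0],
            [0, 1, 0, 1],
            [1, 0, 2, 0]])

rref, pivots = M.rref()
print(rref)     # last column reduces to (2, 1, -1, 0)^T
print(pivots)   # (0, 1, 2): the final column is not a pivot column
```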
http://mathoverflow.net/questions/85593/decompose-tensor-product-of-type-g-2-lie-algebras/86137
## Decompose tensor product of type $G_2$ Lie algebras.

Let $G$ be a semisimple Lie algebra over $\mathbb{C}$. Let $V(\lambda)$ be the irreducible highest weight module for $G$ with highest weight $\lambda$. If $G$ is of type A, we can decompose $V(\lambda)\otimes V(\mu)$ using the Littlewood-Richardson rule. In other types, if $\lambda$ and $\mu$ are given explicitly, we can use the Weyl character formula to compute the decomposition. My question is: are there rules similar to the Littlewood-Richardson rule for other types (especially for type $G_2$)? Does Littelmann's path model work for this? -

## 4 Answers

The search term you want to look for is "Klimyk's Formula." This formula boils down to the following: Fix $G$ a compact complex semisimple Lie group. Suppose $V(\lambda)$ and $V(\mu)$ are irreducible representations with highest weights $\lambda$ and $\mu$ respectively. Let $W_\lambda = \{\lambda_1,\lambda_2,\ldots,\lambda_d\}$ be the multiset of weights of $V(\lambda)$ with $d = \dim(V(\lambda))$. Then the irreducible components of $V(\lambda)\otimes V(\mu)$ are given by $\{V(\mu+\lambda_i)\}_{i=1}^d$. To apply this in practice, you need to be comfortable with the concept of defining $V(\lambda)$ when $\lambda$ is not a dominant weight (which sometimes causes modules to cancel when they appear with both positive and negative signs in the sum), but it applies to lots of groups (even beyond the scope of compact complex semisimple in some cases if I'm not mistaken), and Littlewood-Richardson is just the special case of this formula in type $A$. An example for $G_2$ (since that is also my favorite compact semisimple Lie group) is to let $\lambda = [1,0]$ be the highest weight of the 7-dimensional representation and $\mu = [0,1]$ the highest weight of the 14-dimensional adjoint representation. The seven weights of $V(\lambda)$ are $[1,0]$, $[-1,1]$, $[2,-1]$, $[0,0]$, $[-2,1]$, $[1,-1]$, and $[-1,0]$ so Klimyk tells us the 98-dimensional tensor product decomposes as: $V([1,1]) \oplus V([-1,2]) \oplus V([2,0]) \oplus V([0,1]) \oplus V([-2,2]) \oplus V([1,0]) \oplus V([-1,1])$ This is where familiarity with interpreting modules with non-dominant highest weights comes in; $V([-1,2])$ and $V([-1,1])$ turn out to be 0-dimensional modules, while $V([-2,2]) \cong -V([0,1])$*. Thus the terms which do not disappear are $V([1,1])$ which is a 64-dimensional module, $V([2,0])$ which is a 27-dimensional module, and $V([1,0])$ which is the 7-dimensional defining representation, a total of 98 dimensions. If you had instead chosen to switch $\lambda$ and $\mu$ and add the 14 weights of $V([0,1])$ to $[1,0]$, you would have obtained 14 modules, but as before, some would have been zero and others would have cancelled in pairs, ultimately leading to the same three modules as above being the only things left over. In my opinion, this reflexivity always holding is the coolest thing about Klimyk's formula. One neat corollary to Klimyk's formula is that a tensor product of two irreducible modules cannot decompose into a sum of more than $d$ irreducibles where $d$ is the minimum of the dimensions of the two modules. *EDIT: After posting, I decided to add a bit more about modules with non-dominant highest weight. Basically, the weights of a group $G$ are permuted via the Weyl group action on the weights.
Weights are determined by integer $r$-tuples where $r$ is the rank of $G$; tuples containing a $-1$ lie in the walls of the Weyl chambers and so the modules with these highest weights end up being 0. There are a few other subspaces which also correspond to walls; weights $\mu$ not lying in the walls satisfy $w(\mu) = \lambda$ for some dominant weight $\lambda$ (all coordinates nonnegative) and a unique $w$ in the Weyl group (i.e. only one $w$ in the Weyl group will take $\mu$ to a dominant weight, so $\lambda$ is also uniquely determined). Then $V(\mu)$ is defined by the following relationship: $V(\mu) = (-1)^w\cdot V(\lambda)$ Here $(-1)^w$ is the sign representation which appears with all Weyl groups; in the $A$-series whose Weyl groups are the $S_n$'s this is the ordinary sign representation. -

Since the decomposition of tensor products has a complicated history, it's worth adding some comments to ARupinski's answer. 1) Though Weyl's character formula is fundamental for finite dimensional representation theory, it doesn't lead immediately to a method for decomposing tensor products. However, Steinberg derived in 1961 such an elegant method (which however involves an impractical double summation over the Weyl group) here. 2) As early as 1937, Brauer wrote down the explicit recipe referred to here (and often in the applied literature directed toward physicists) as Klimyk's rule: this was a short article in C.R. Acad. Sci. Paris 204, more easily located in Brauer's collected papers. The selling point of this rule is that it requires only the knowledge of one highest weight along with the full character of the second module (given for example by Weyl or by Kostant's equivalent later method). These matters are discussed in Section 24 of my 1972 Springer graduate text, where a proof of Brauer's formula is formulated in the exercises. 3) Klimyk's early work appeared around 1967 in Russian, with an English translation soon after in an AMS volume: MR0237712 (38 #5993) 22.60, Klimyk, A.U. Multiplicities of weights of representations and multiplicities of representations of semisimple Lie algebras. (Russian) Dokl. Akad. Nauk SSSR 177 (1967) 1001–1004. 4) It's important to realize that Klimyk's work has involved not just tensor products of finite dimensional representations, but also tensor products in which one factor is allowed to be infinite dimensional of various special types. This is part of a much broader program for Lie group representations involving Kostant and others. 5) From a purely computational point of view, getting explicit results for type $G_2$ is not at all easy because dimensions grow so fast. There used to be some explicit printed tables, which always stop when the going gets tough. The older methods of Brauer and Klimyk are anyway inherently inefficient, requiring a huge amount of cancellation as indicated by ARupinski. Combinatorists have found Littelmann's approach (generalizing Littlewood-Richardson for type A) more appealing, but I'm unaware of literature illustrating the method explicitly for type $G_2$.
6) It's also worth mentioning that interesting special features of tensor products have been studied using algebraic geometry by Kumar and others in response to the old "PRV Conjecture", but here the concern is about the occurrence of specific summands in a tensor product decomposition and not the entire picture. (To get some perspective on the range of "geometric" literature about tensor product decompositions, take a look at Kumar's 2010 ICM report. This is on his homepage but apparently not on arXiv: http://math.unc.edu/Faculty/kumar/papers/kumar60.pdf) - Interesting history of the formulas, Jim. In fact I think it almost certainly was in your book that I first came across the formula when I started needing to calculate decompositions for my dissertation work. – ARupinski Jan 19 2012 at 23:43

The short answer is Yes. Here are some of the details. You start with a crystal for the representation. You don't need to specify which model you are taking. When you take a tensor product a subset of the vertices of the crystal give highest weight vectors. The rule is a simple rule depending on the depth (or rise) of the vertex. For the fundamental representation this is particularly straightforward. If you want to get started with crystals, a good place to begin is: MR1881971 (2002m:17012) Hong, Jin; Kang, Seok-Jin. Introduction to quantum groups and crystal bases. Graduate Studies in Mathematics, 42. American Mathematical Society, Providence, RI, 2002. xviii+307 pp. ISBN: 0-8218-2874-6 This does not discuss the tensor product rule you asked for. The original reference for the general tensor product rule is: MR0225936 (37 #1526) Parthasarathy, K. R.; Ranga Rao, R.; Varadarajan, V. S. Representations of complex semi-simple Lie groups and Lie algebras. Ann. of Math. (2) 85 1967 383--429. This tensor product rule requires further notation. The advantage is that multiplicities are calculated directly without any cancellation. - @Bruce, thank you very much. I don't know much about crystals for a representation. Could you give some references about this? – Jianrong Li Jan 13 2012 at 17:19

I think there is. Actually there is a version of Weyl's theory for $G_2$, i.e. all finite dimensional irreducible representations are given by Schur functors and hence are intimately related to representations of symmetric groups. See the article by Jing-Song Huang for details. - 1 @r0b0t, it seems that this paper is only for the decomposition of $V(\omega_1)^{\otimes n}$ – Jianrong Li Jan 13 2012 at 20:11 1 My reasoning is that the Littlewood-Richardson rule can be viewed as a statement about decompositions of Schur functions. These are related to representations of $\mathfrak{sl}$ via Schur-Weyl duality. Since the article by Huang proves this kind of duality for $G_2$ I expect that some kind of LW rule can be inferred from this. I suggest you look first at the case of $\mathfrak{so}$, where the contractions are surely more familiar operations. I think that quite a good reference is the book by Goodman and Wallach. – robot Jan 13 2012 at 22:45
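To make the bookkeeping in ARupinski's $G_2$ example concrete, here is a tiny Python sketch of the first Brauer-Klimyk step; it assumes nothing beyond the seven weights listed in that answer and only forms the candidate list $\{V(\mu+\lambda_i)\}$. Deciding which non-dominant candidates vanish or cancel still requires the Weyl-group reflection step described in the answer's edit.

```python
# Seven weights of V([1,0]), copied from ARupinski's answer.
weights_of_V10 = [(1, 0), (-1, 1), (2, -1), (0, 0), (-2, 1), (1, -1), (-1, 0)]
mu = (0, 1)  # highest weight of the 14-dimensional adjoint representation

candidates = [(a + mu[0], b + mu[1]) for a, b in weights_of_V10]
print(candidates)
# -> [(1, 1), (-1, 2), (2, 0), (0, 1), (-2, 2), (1, 0), (-1, 1)]
# Per the answer: V([-1,2]) and V([-1,1]) are 0-dimensional, and V([-2,2])
# cancels V([0,1]), leaving V([1,1]) + V([2,0]) + V([1,0]),
# i.e. 64 + 27 + 7 = 98 dimensions.
```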
http://mathhelpforum.com/calculus/161725-evaluate-line-integral.html
Thread:

1. Evaluate Line Integral Q: Evaluate $\int_{\gamma} y^2 e^{2\sin{x}} \cos{x} dx + ye^{2\sin{x}} dy$ where $\gamma$ is the arc of the curve $y = \cos{x}$ from the point $(0,1)$ to the point $(\frac{\pi}{2},0)$. A: We know that the integral is exact, meaning that we can choose any path from $(0,1)$ to $(\frac{\pi}{2},0)$. In the solutions they have gone through the origin, along the coordinate axes. So the path taken is from $(0,1)$ to $(0,0)$, and then to $(\frac{\pi}{2},0)$. The answers then state that: The integral is then $\int^0_1 ye^{0} dy + \int^{\frac{\pi}{2}}_0 0 dx$ Not sure how they get this? It looks like they've just subbed $x=0$ and $y=\frac{\pi}{2}$ into the first and second bit of the original integral? I know it's going to be something obvious but I'd appreciate it if someone could help me out with this. Thanks in advance Craig

2. Originally Posted by craig Q: Evaluate $\int_{\gamma} y^2 e^{2\sin{x}} \cos{x} dx + ye^{2\sin{x}} dy$ where $\gamma$ is the arc of the curve $y = \cos{x}$ from the point $(0,1)$ to the point $(\frac{\pi}{2},0)$. A: We know that the integral is exact, meaning that we can choose any path from $(0,1)$ to $(\frac{\pi}{2},0)$. In the solutions they have gone through the origin, along the coordinate axes. So the path taken is from $(0,1)$ to $(0,0)$, and then to $(\frac{\pi}{2},0)$. The answers then state that: The integral is then $\int^0_1 ye^{0} dy + \int^{\frac{\pi}{2}}_0 0 dx$ Not sure how they get this? It looks like they've just subbed $x=0$ and $y=\frac{\pi}{2}$ into the first and second bit of the original integral? On the line from (0, 1) to (0, 0), x is identically 0. And on the line from (0, 0) to $(\pi/2, 0)$, y is identically 0. You could, for example, take as parametric equations for (0, 1) to (0, 0), x= 0, y= t with t from 1 to 0 and for (0, 0) to $(\pi/2, 0)$, x= t, y= 0 for t from 0 to $\pi/2$. I know it's going to be something obvious but I'd appreciate it if someone could help me out with this. Thanks in advance Craig

3. I understand most of what you're saying, when you sub in y = 0 then that's where you get the 0 from etc. I'm just confused how you get: $\int^0_1 ye^{0} dy$ ? What happens to the $y^2$ from the original integral? Thanks again

4. Originally Posted by craig I understand most of what you're saying, when you sub in y = 0 then that's where you get the 0 from etc. I'm just confused how you get: $\int^0_1 ye^{0} dy$ ? What happens to the $y^2$ from the original integral? Thanks again On the first leg, from (0, 1) to (0, 0) x is identically 0 so x does not change. That means that dx is also 0. More specifically, if we let x= 0, y= t, then dx= 0dt and dy= dt. $\int y^2e^{2\sin(x)}dx+ ye^{2\sin(x)}dy$ becomes $\int t^2e^0(0dt)+ te^0(dt)= \int t dt$

5. Of course! Thank you for that, always amazes me how simple most things are when you know the answer...
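Since the thread turns on the integrand being exact, here is a short SymPy sketch verifying that and evaluating the integral from a potential. The potential $F$ below is found by inspection and is my addition, not part of the original thread.

```python
import sympy as sp

x, y = sp.symbols('x y')
P = y**2 * sp.exp(2*sp.sin(x)) * sp.cos(x)   # coefficient of dx
Q = y * sp.exp(2*sp.sin(x))                  # coefficient of dy

# Exactness: dP/dy == dQ/dx, so the integral is path-independent.
print(sp.simplify(sp.diff(P, y) - sp.diff(Q, x)) == 0)   # True

# A potential F with dF = P dx + Q dy; the integral is F(end) - F(start).
F = y**2 * sp.exp(2*sp.sin(x)) / 2
print(sp.simplify(sp.diff(F, x) - P) == 0,
      sp.simplify(sp.diff(F, y) - Q) == 0)               # True True
print(F.subs({x: sp.pi/2, y: 0}) - F.subs({x: 0, y: 1})) # -1/2
```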
http://mathhelpforum.com/differential-geometry/210769-hausdorff-spaces-convergence-print.html
# Hausdorff spaces and convergence

• January 4th 2013, 12:57 PM Impo Hausdorff spaces and convergence Hi Everyone, I'm new here and I'm dealing with some topology questions. Suppose that (X,T) is a C1 space; then the following statements are equivalent: (1) (X,T) is a hausdorff space (2) Every convergent sequence has a unique limit (3) (X,T) is a T1-space and every compact subset is closed. I know this is not a homework service but honestly I have no idea where to start. In fact, in topology I always described convergence using filters and in this question I have to prove something about sequences so I'm confused. Can someone give me a hint? Thanks in advance! Cheers Impo

• January 4th 2013, 01:03 PM Plato Re: Hausdorff spaces and convergence Please tell us what "Suppose that (X,T) is a C1 space" means. C1 is not a common notation in topology. Does it mean first countable?

• January 4th 2013, 01:17 PM Impo Re: Hausdorff spaces and convergence Quote: Originally Posted by Plato Please tell us what "Suppose that (X,T) is a C1 space" means. C1 is not a common notation in topology. Does it mean first countable? Indeed.

• January 4th 2013, 01:53 PM Plato Re: Hausdorff spaces and convergence Quote: Originally Posted by Impo Suppose that (X,T) is a C1 space then the following statements are equivalent: (1) (X,T) is a hausdorff space (2) Every convergent sequence has a unique limit (3) (X,T) is a T1-space and every compact subset is closed. I know this is not a homework service Note that a Hausdorff space is a $T_2$ space. Note the capital H. If $l$ is the limit of $(x_n)$, then for any open set $O$ with $l\in O$, all but a finite collection $\{x_1,x_2,\cdots,x_n\}$ must belong to $O$. If you have two limits then in a Hausdorff space there are disjoint open sets that separate those limits. That is impossible. Now you post some work.

• January 4th 2013, 02:38 PM Impo Re: Hausdorff spaces and convergence $(2)\Rightarrow(3)$ If I'm allowed to use the fact that $X$ is Hausdorff then (3) is proved immediately, but I don't think I'm allowed to. $(3)\Rightarrow(1)$ Every $T_1$ space is a $T_0$ space; if I can show that $X$ is regular then we're done because every $T_3$ space is Hausdorff. Let $A \subset X$ be a compact subset and suppose $x \notin A$; then there can be found disjoint open neighbourhoods for $A$ and $x$, but by (3) every compact subspace is closed, thus $A$ is closed; therefore $X$ is regular.

• January 4th 2013, 03:03 PM jakncoke Re: Hausdorff spaces and convergence Well you are supposed to prove IFF implications: $(1)$ if and only if $(2)$, and $(2)$ if and only if $(3)$; proving these two things is akin to showing all three statements are equivalent. Plato already gave you $(1) \to (2)$; now prove $(2) \to (1)$

• January 4th 2013, 06:26 PM jakncoke Re: Hausdorff spaces and convergence So let me give you some help for $(2) \to (1)$. Now, (X,T) being a C1 space means that every point has a countable neighborhood basis (meaning a sequence of open neighborhoods such that for any open neighborhood U of a, there exists an element in the sequence which is contained in U). Now suppose a, b are points in the space such that $a \not = b$. Since (X,T) is C1, a has a countable neighborhood basis, call it $\bar{A} = \{ N^{a}_i | i \in \mathbb{N} \}$ where $N^{a}_i$ is an open neighborhood of a. So for any open neighborhood U of a, there exists an $i \in \mathbb{N}$ such that $N^{a}_i \subset U$. Now since the intersection of any finite number of open sets is open, $N^{a}_1 \cap N^{a}_2 \cap ... \cap N^{a}_i \subset N^{a}_i \subset U$.
For any $i \in \mathbb{N}$ let $O^{a}_i = N^{a}_1 \cap ... \cap N^{a}_i$. Note that $O^{a}_j \subset N^{a}_i$ for all $j \geq i$, and since $O^{a}_j \subset N^{a}_i \subset U$ for $j \geq i$, this should remind you of the definition of topological convergence. That is, a sequence $x_n \to a$ if for any open neighborhood $U$ of $a$, we can find a $j \in \mathbb{N}$ such that $p \geq j$ implies $x_p \in U$. Now just as we did for a, for b take $\bar{B} = \{N^{b}_i | i \in \mathbb{N} \}$ (its countable basis) and $O^{b}_i = N^{b}_1 \cap ... \cap N^{b}_i$ for $i \in \mathbb{N}$. Now for any natural number i, $O^{a}_i \cap O^{b}_i$ is an open set, since it is the finite intersection of 2 open sets. Now if we assume a and b have no disjoint open sets containing a and b, then for no $i \in \mathbb{N}$ is $O^{a}_i \cap O^{b}_i = \emptyset$. So define a sequence $x_i$ where $x_i \in O^{a}_i \cap O^{b}_i$ for $i \in \mathbb{N}$. Now I claim $x_i \to a$. So pick any open neighborhood of a, call it U. Since there exists a $j \in \mathbb{N}$ such that $N^{a}_j \subset U$, and since we know $O^{a}_p \subset N^{a}_j \subset U$ for all $p \geq j$, this shows that for any $x_p$ with $p \geq j$, $x_p \in O^{a}_p$ and thus $x_p \in U$. Thus $x_i \to a$. Now since we assumed $x_i \in O^{a}_i \cap O^{b}_i$ for $i \in \mathbb{N}$, then $x_i \in O^{b}_i$. I claim $x_i \to b$. Now again pick any open neighborhood of b, call it U. Then there exists a $j \in \mathbb{N}$ such that $N^{b}_j \subset U$, which gives $O^{b}_p \subset N^{b}_j \subset U$ for all $p \geq j$. So this shows that for all $p \geq j$, $x_p \in O^{b}_p \subset U$; thus $x_i \to b$. Since we assumed every convergent sequence has a unique limit, $a = b$, which is a contradiction. Thus there must be an open set containing a which is disjoint from an open set containing b. Thus Hausdorff.

• January 5th 2013, 04:13 AM Impo Re: Hausdorff spaces and convergence Hi jakncoke, thanks for your answer. I wanted to prove the statements are equivalent by proving (1)=>(2)=>(3)=>(1). I'm only missing (2)=>(3) for now.

• January 5th 2013, 06:13 AM Plato Re: Hausdorff spaces and convergence Quote: Originally Posted by Impo I wanted to prove the statements are equivalent by proving (1)=>(2)=>(3)=>(1). I'm only missing (2)=>(3) for now. I commend you for wanting to show equivalence that way. But as you have already noted: Quote: Originally Posted by Impo $(2)\Rightarrow(3)$ If I'm allowed to use the fact that $X$ is Hausdorff then (3) is proved immediately, but I don't think I'm allowed to. I think that I would use some sort of argument such as jakncoke used in reply #7 to show the space is Hausdorff. Then, as you note, it is easy to proceed.
http://www.physicsforums.com/showthread.php?p=4033953
Physics Forums

## noise term η(t) in Langevin equation

The Langevin equation describes Brownian motion. But I don't understand the noise term η(t) in the equation. What's the relationship between η(t) and the force proportional to the velocity due to Stokes' law? I mean, they both arise from the collisions with the molecules of the fluid. So is η(t) just the stochastic part of the force due to Stokes' law?

The Langevin equation is a stochastic differential equation. Physically it applies to the motion of a heavy object (like a pollen grain in water) which interacts with light particles such that a single collision of the heavy object with a light matter particle (here water molecules) has little impact on it. You need a lot of collisions to make a macroscopically noticeable effect. The important point is that this situation has a clear separation of time scales. On the one hand you have the time between two collisions of the medium particles with the heavy object, which is pretty short compared to the time scales over which the heavy object moves a noticeable macroscopic distance. The microscopic time scale is small compared to this macroscopic time scale. On average, the momentum transfer from the heavy particle to the medium is described by friction. Without any external forces you have $$\frac{\mathrm{d}}{\mathrm{d}t} \langle p \rangle=-\gamma \langle p \rangle.$$ This is a usual differential equation for the average momentum of the heavy particle. The Langevin equation takes into account also fluctuations of the force from the many random collisions per macroscopic time step. It reads $$\mathrm{d} p=-\gamma p \,\mathrm{d} t+\sqrt{D \,\mathrm{d} t}\, \eta.$$ Here $\eta$ is a Gaussian-normal distributed uncorrelated random variable with the properties $$\langle{\eta(t)} \rangle=0, \quad \langle{\eta(t) \eta(t')} \rangle=\delta(t-t').$$ You can show that this equation is equivalent to the Fokker-Planck equation for the phase-space distribution function. You find more details about the derivation and properties of the Langevin equation in one of my papers on heavy quarks in the Quark Gluon Plasma. There it's derived for the relativistic motion, but the general arguments are the same for relativistic and non-relativistic situations: http://fias.uni-frankfurt.de/~hees/p...gp4-bibtex.pdf
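As a quick illustration of the discretized equation above, here is a minimal Euler-Maruyama simulation in Python, assuming NumPy; the values of γ, D, and the step sizes are arbitrary choices. For this form of the equation the stationary variance of p should approach D/(2γ), which the script checks.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, D, dt, n_steps, n_paths = 1.0, 2.0, 1e-3, 20000, 5000

p = np.zeros(n_paths)
for _ in range(n_steps):
    eta = rng.standard_normal(n_paths)        # unit-variance Gaussian noise
    p += -gamma * p * dt + np.sqrt(D * dt) * eta

# Fluctuation-dissipation check: stationary <p^2> should be near D/(2*gamma).
print(np.mean(p**2), D / (2 * gamma))
```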
http://www.openwetware.org/wiki/Physics307L:People/Josey/Final_Draft
# Physics307L:People/Josey/Final Draft

Steve Koch 21:29, 14 December 2010 (EST): Overall, excellent report!

## Measuring and Predicting Background Radiation Using Poisson Statistics

Author: Brian P. Josey Experimentalists: Brian P. Josey and Kirstin G. G. Harriger Junior Lab, Department of Physics & Astronomy, University of New Mexico Albuquerque, NM 87131 [email protected]

## Abstract

In this paper, the rate of background radiation was measured with differing time intervals over which the measurements took place. The fact that this data followed the Poisson distribution was demonstrated. After establishing the Poisson distribution, the results of the data were then used to predict the average rate of radiation per second. This estimated value was found to be 30 ± 4 radiations per second. A new set of data was then generated from a run measuring the radiation rate per second. This data indicated the average rate in the laboratory to be 30 ± 5 radiations per second, indicating that the technique is useful to model and measure background radiation. This technique was then further applied to measuring the radiation rate of two gamma-ray sources, Co-60 and Cs-137, determining rates of 560 ± 20 and 1400 ± 40 radiations per second, respectively. These samples also followed a Poisson distribution.

## Introduction

The Poisson distribution, first described by Siméon Denis Poisson in 1838 [1], is used to describe events that occur in discrete numbers at random and independent intervals, but at a definitive average rate. Examples of this are radiation from nuclei over a range of time, the number of births in a maternity ward over time [2], or the movement of microorganisms [3]. Mathematically, the Poisson distribution is given as: $P_{\mu} (\nu) = e^{-\mu} \frac {\mu ^ {\nu}} {\nu !}$ where, • $P_{\mu}(\nu)$ - is the probability of ν counts in a definitive interval, and • μ - is the mean number of counts in the time interval. This distribution has the unique property that the square root of the average is also the standard deviation [2]. Here the Poisson distribution was demonstrated by measuring the background radiation in a standard physics teaching laboratory at 5000 ft elevation. Background radiation is the naturally occurring ambient radiation in the atmosphere. This radiation is the result of several sources, such as cosmic rays, naturally occurring radioactive isotopes, and fallout from nuclear weapons [4]. Several sets of data were created by varying the size of the time interval of the radiation measurements, referred to as the "window size." After taking several different sets of data with the window size ranging from 10 ms to 800 ms, the average rate per second was calculated, and then compared to data taken from a set of 1 second time interval measurements. The radiation rates were then measured for two radioactive sources not commonly found in nature. The first was a sample of cobalt-60, which has a half-life of 5.27 years [5]. The second was a sample of cesium-137, which has a half-life of 30.1 years [6]. Both of these sources are manufactured radioactive isotopes that do not occur naturally and are gamma-ray emitters.

## Methods and Materials

Figure 1-Detector This device is the combined scintillator-PMT used to detect the radiation events in this paper. Figure 2-Power Supply This is the power supply used to generate a potential on the scintillator for this lab. This potential allowed for it to collect ions generated by the radiation events.
For this experiment, a combined scintillator-photomultiplier tube (Tracor Northern Model TN-1222), referred to in this paper as scintillator-PMT (Figure 1), was used to collect data [7]. To do this, every time the scintillator detected ions generated by radiation, it would fire a beam of ultraviolet light to the PMT. The PMT would then convert this light signal into a single voltage pulse. This voltage would be carried to an internal MCS card in a computer (J and J Workstation, with Intel Celeron CPUs of 2.26GHz and 2.27GHz, and 1.96 GB of RAM), where it would be analyzed by the UCS 30 software (Spectrum Techniques). This software would then count each signal voltage it received, and create a set of data containing both the time interval over which the data was collected, and the number of times it received a signal voltage in a specific time interval. When the UCS 30 software generated a set of data, it would be saved into a data file that could be manipulated and processed using MATLAB v. 2009a. In order for the scintillator to detect the radiation, it needed a potential gradient that would pick up the ions created in the radiation event. This potential was supplied by a Spectech Universal Computer Spectrometer power supply, Figure 2, and set to 1200 V throughout the course of the experiment. SJK 21:12, 14 December 2010 (EST): Still think you're a little off on the workings of the scintillator-PMT

To demonstrate the Poisson distribution, the scintillator-PMT, power supply, and computer were all turned on. The UCS 30 software was then loaded, and the dwell time over which the software would count radiation events was set to various lengths. Data was collected over a series of different time intervals that were set at 10, 20, 40, 80, 100, 200, 400 and then 800 ms. Each run contained 2064 windows of the given size. All of these runs had the same settings, 1200 V from the power supply, and using the MCS internal configuration, and varied only in their time interval length. The data was then analyzed, see results and discussion below, to demonstrate that background radiation did follow a Poisson distribution. This data was then used to predict the behavior of a similar set of data for a 1 second window size. After predicting its behavior, the system was run again, using 2064 intervals of 1 second length. This data was then compared to the predicted results from the initial data.

After the background radiation rate was measured, the radiation rates for two gamma-ray emitters, Co-60 and Cs-137, were measured. To do this, the background radiation was again measured in the way described above to account for any changes due to the elapsed time between these two parts of the experiment. However, for this measurement, data was collected for only 10, 20, 100, and 400 ms time intervals. Then a cobalt-60 gamma-ray source (1.057 uCi, Isotope Products Laboratories, lot # 519-2-2, produced 20 Jan 96) was placed flat at the PMT side of the scintillator-PMT, approximately half an inch away. The radiation rates were then measured using window sizes of 20, 40, 100 and 400 ms. From this, the data was shown to follow a Poisson distribution, and the average radiation rate per second was calculated. This procedure was then repeated for a similar sample of cesium-137 (1.073 uCi, Isotope Products Laboratories, lot # 519-3-1, produced 20 Jan 96), illustrating the Poisson distribution in the process, and determining the average radiation rate per second.
## Results

Note: all of the raw data for this report, and some of the MATLAB code, can be accessed here. Warning, the page takes a while to load. SJK 21:14, 14 December 2010 (EST): Excellent open data! (Matlab code seems to be missing)

From the scintillator-PMT and the UCS 30 software, the data was processed using Google Docs to determine the range of the number of radiation events per window, and the number of windows with the given number of radiation events. A fractional distribution of the data was also determined. This data is summarized in Table 1 below:

Table 1 This table summarizes the raw data from each of the initial experimental runs. For each window size, the whole range of the number of radiation events per window is given in the first column. In the second column, the number of windows that had this given number of radiation events is given. For example, for the 10 ms run, only a single window had four radiation events in it. The third and final column gives the fractional probability, out of 1, of such an event happening in the given run.

Figure 3-Probability per Window In this figure, the number of radiations per window, horizontal axis, is plotted against the likelihood of that number of events happening, vertical axis. The four sets of data that are plotted are: blue - 10 ms, red - 80 ms, black - 200 ms and green - 800 ms, where the time denotes the window size.

From this data, a graph was generated to illustrate the characteristics of the data. This graph is given in Figure 3. For the sake of simplicity only four of the nine data runs are graphed; however, even from this small sample of data, the trends across the whole of the sample are still clear. While the exact implications of this data are discussed in the discussion section below, the data makes it clear that as the window size increases, the most probable number of radiations per window increases, and the distribution of the probabilities spreads. This spreading in the data is a result of a greater standard error. Together these two trends are qualitative reasons to believe that the data follows a Poisson distribution. This is further discussed below in the discussion section [2].

A more potent argument for the Poisson distribution is to determine the averages and standard deviations for each set of data. This data is summarized in Table 2 below:

Table 2 This table summarizes the raw data collected from the runs with varying window sizes, from 10 ms to 800 ms. The first row represents the average radiation events per window as calculated directly from the data. The second and third rows represent the standard deviation of the data as the square root of the average, and as directly calculated from the data. The fourth row represents the percent error of the standard deviation from the data relative to the square-root standard deviation. Because of the very low difference, it is clear that the data follows a Poisson distribution. The last two rows are the average and standard deviation converted from window size to rate per second.

As this table illustrates, the average number of radiation events per window grows as the window size increases, and the standard deviation increases with it. There are two standard deviations in the table. The first is the square root of the average, while the second is directly calculated from the data.
The reason for this, discussed in greater detail below, is that the standard deviation of a Poisson distribution is identical to the square root of the average. As the percent error between the two values shows, the data does follow this trend very closely, and the small differences, which never exceed 1.5 %, indicate that this is true.

Figure 4- Radiations per Second This graph represents the average number of radiations per second and their standard deviations as measured in the report. For simplicity, the horizontal axis is given in indices so that 1 through 8 gives window sizes 10 ms to 800 ms in increasing size.

The averages and standard deviations from the data were then divided by the window size to give the rate and error as the number of radiations per second. These values are then graphed in Figure 4. Clearly the standard deviation decreases as the window size approaches 1 second, indicating the greater accuracy in the data. From this data, the average rate and standard deviation for a window size of 1 second was calculated. This value, $x_{wav}$, was calculated using weighted averages [2]: $x_{wav} = \frac {\sum {w_i x_i}} {\sum {w_i}}$ where • $x_{wav}$ - is the best estimate for the average number of radiations per second, • $w_i$ - is the weight of each average; this value is just the inverse of the square of the standard error for a given run, • $x_i$ - is the average value from each run. The standard deviation is then given as: $\sigma_{wav} = \frac {1} {\sqrt {\sum {w_i}}}$ where again, $w_i$ is the weight of each average, and $\sigma_{wav}$ is the best estimate for the standard deviation in the final average. These calculations then give a predicted value of 30 ± 4 radiations per second. Another experiment was run, this time with the window size raised to 1 second. The data from this experiment is summarized in Table 3 below. It is important to note that the predicted value for the average number of radiations per second differs from the measured value by only 0.725 %.

Table 3 This table summarizes the data taken for a window size of 1 second. On the left is the average predicted by the calculations, its square root, and the predicted standard deviation. Below this is the actual average, its square root, standard deviation and the percent difference between the prediction and the measured amount. On the right is the range of radiations per window, the raw number of windows with a given number and their fractional frequency.

Figure 5- Cobalt-60 Distribution This figure illustrates the behavior of the radiation from the Co-60 sample. As the dwell time for the measurements increases, the probability distribution shortens and spreads outward, which is characteristic of a Poisson distribution. Figure 6- Cesium-137 Distribution This figure illustrates the distribution of radiation from the Cs-137 sample. Like Figure 5 above, the general behavior of the data is characteristic of a Poisson distribution.

From the second set of measurements, we were able to determine the average rate of radiation per second for the background in the same way as given above, giving a rate of 24 ± 5 radiations per second. While this value is slightly less than the one from the original experiment, possibly from a change of conditions over the weeks between the experiments, the nature of this change is outside the scope of this paper, and it is only used as a baseline for our cobalt and cesium measurements.
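As a minimal sketch of the weighted-average calculation above, assuming NumPy (the per-run rates and standard errors below are placeholder values, not the numbers from the report's tables):

```python
import numpy as np

rates  = np.array([30.2, 29.5, 30.8, 29.9])   # x_i: rate per second, per run
sigmas = np.array([12.0,  8.0,  5.0,  3.0])   # standard error of each x_i

w = 1.0 / sigmas**2                            # w_i = 1 / sigma_i^2
x_wav = np.sum(w * rates) / np.sum(w)          # best estimate of the rate
sigma_wav = 1.0 / np.sqrt(np.sum(w))           # its standard deviation

print(f"{x_wav:.1f} +/- {sigma_wav:.1f} radiations per second")
```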
As summarized in Table 4 below, both of the gamma-ray sources exhibited a growth in average rate and standard deviation as the measurement dwell time increased. This is illustrated in Figure 5 and Figure 6 for cobalt and cesium respectively. From the data, it was determined that cobalt-60 had an average rate of 560 ± 20 radiations per second, approximately twenty-three times greater than the background. Similarly, the cesium-137 sample had an average radiation rate of 1400 ± 40 radiations per second, nearly sixty times greater than the background.

Table 4 This table summarizes the data from the second background measurement and the measurements for both cobalt-60 and cesium-137. For each measurement, the average rate per dwell time is given, as well as the square-root standard deviation and the experimental standard deviation, which are used to establish the Poisson distribution through their percent differences. Also, the average rate per second per trial, and per sample, is given. For the cesium and cobalt samples, the background radiation has also been subtracted.

## Discussion

Qualitatively, the Poisson distribution has a specific trend in its behavior, namely, as the window size over which the counts were taken increases, the average number of events per window increases, and the distribution spreads out over time [8]. This results in probability graphs moving to the right and flattening in a standard x-y graph, as Figure 3 demonstrates. This behavior in the data indicates that a Poisson distribution could be the most likely way to model and represent the data. However, there is a much stronger argument for this conclusion. The qualitative behavior in the Poisson distribution is actually the result of the change in the standard deviation with the mean [2,8]. As Table 2 illustrates, the square root of the mean is very close to the experimental standard deviation. In fact, it can be shown that the standard deviation of a Poisson distribution and the square root of the mean are the same [2]. The data for the first several experimental runs illustrates that the standard deviation and square root of the mean never differ by more than 1.3 %, indicating a very close relationship. This relationship is so close that it is clear that the background radiation in the laboratory can be modeled by a Poisson distribution. Also, this means that as the mean number of radiations per window increases, the standard deviation increases proportional to the square root of the mean, giving the spreading behavior in Figure 3. Armed with this knowledge of the behavior of the background radiation, the means and standard deviations for the experimental sets between 10 ms and 800 ms were used to determine the average rate of radiation per second. As discussed in the results section, this value was found by using the weighted average method to account for varying degrees of certainty in the measurements. This generated a value of 30 ± 4 radiations per second, and the experimental test for the one second time window gave a final result of 30 ± 6 radiations per second. As summarized in Table 3, the difference between the two values was only 0.725 %. Because of this close relationship between the two values, it is clear that the method of applying the Poisson distribution to calculate the average radiation rate is successful.
This can then be used for any other window size, where determining the final result is only a function of scale. SJK 21:19, 14 December 2010 (EST): I still have a disagreement with this weighted average method and that it way overestimates the uncertainty--see notes on rough draft page.

To judge the merits of using this technique to measure the radiation rates of other sources, the nature of a different type of radiation had to be determined. From the data collected for the cobalt and cesium samples, it is clear that they generally follow a Poisson distribution, spreading out and moving to greater values as the dwell time increased. From the data, it is also clear that the difference between the square root of the average and the standard deviation of the data never exceeds 3.5 %, quantitatively supporting the Poisson distribution. Armed with this knowledge, the same technique used to determine the rate for the background was used to get the rate for the samples. After subtracting the background radiation rate, this gave values of 560 ± 10 and 1400 ± 40 radiations per second for cobalt-60 and cesium-137 respectively. The value for cobalt is twenty-three times higher than the background, while it is nearly sixty times greater for cesium. Clearly the samples are much more radioactive than the surroundings.

## Conclusions

For this paper, the radiation rates were measured for varying time intervals to determine their behavior and find a mathematical representation of it. Because the behavior in the probability distributions followed a pattern where the mean grew as the interval increased and the experimental standard deviation grew as the square root of the mean, it became clear that the Poisson distribution is the best way to model the behavior of the background radiation. The difference between an ideal Poisson distribution and the data was found to never exceed 1.3 % across all the data. Prompted by this, the average rate of radiation per second was calculated from the data. This gave an average rate of 30 ± 4 radiations per second. These values were checked experimentally, giving a value of 30 ± 6 radiations per second, showing that the model was acceptable and could create accurate predictions. The utility of this model extends to any process that, like background radiation, occurs at random independent times, but at a definitive average rate. The methods presented here can be used to predict the behavior of these processes and gain very accurate results. The method could also be used to determine the radiation rates of radioactive samples, and how they compare to the background. By subtracting off the background radiation, it was determined that a small sample of cobalt-60 radiated, on average, 560 ± 24 times a second, approximately twenty-three times more often than the background. Also, a sample of cesium-137 was measured to radiate 1400 ± 40 times per second, nearly sixty times more often than the background radiation, and two and a half times more often than the cobalt sample. These measurements illustrate the greater diversity of phenomena that could be described with the Poisson distribution, with great accuracy. SJK 21:22, 14 December 2010 (EST): technically, your sources are emitting in all directions, and your detector may catch a wider swath of cosmic rays--so the comparison could be a little trickier. But I love that you set out to use real sources!
## Acknowledgments

I want to make a couple of acknowledgements of the people that helped me with this report. As always, I want to thank my fellow experimenter, Kirstin Harriger, for all of her help, and doing the experiment. I also want to thank another student, Dan Wilkinson, for originally suggesting that we look at a radioactive source, and for working with Kirstin and me during "extra data week". The professor for this course, Dr. Steve Koch, and the TA, Katie Richardson, were both very helpful as well. Outside of our class, I want to thank the original instructor of this course, Dr. Michael Gold, for writing the lab manual, Dr. Stephen Boyd for helping me with my MATLAB code (as always), and Bill Miller for supplying us with the radiation sources.

## References

1. Poisson, Siméon Denis. Recherches sur la probabilité des jugements en matière criminelle et en matière civile (1838). Link here (in French).
2. Taylor, John R. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, 2e (1997). pp. 174-175 for weighted averages, pp. 245-260 for Poisson. Amazon link.
3. Giuffre, C., Hinow, P., Vogal, R., Strickler, J.R. "The ciliate Paramecium shows higher motility in non-uniform chemical landscapes." PLoS ONE, 2010. Preprint.
4. Thorne, M.C. "Background radiation: natural and man-made." J. Radiol. Prot. 23, 29 (2003). Link on IOP Science.
5. United States Environmental Protection Agency. "Radiation Protection: Cobalt." Government website; link to EPA webpage.
6. National Library of Medicine: Hazardous Substance Database. "Cesium, Radioactive." Government website; link to toxnet webpage.
7. Gold, Michael. Physics 307L: Junior Lab Manual (2006). Link here.
8. Wikipedia. "Poisson Distribution." Article from the web; external link.
http://cs.stackexchange.com/questions/7055/how-does-this-algorithm-for-verifying-if-a-string-is-0n1n-work
# How does this algorithm for verifying if a string is $0^n1^n$ work?

I have found an efficient algorithm for verifying if a string $\omega$ is of the form $0^n1^n$, where $n \in \mathbb{N}$.

1. Scan across $\omega$. If a 1 appears before a 0, then reject.
2. Repeat so long as some 0s and some 1s remain on the tape:
   1. Scan across $\omega$. If the total number of 0s and 1s remaining is odd, reject.
   2. Scan across $\omega$. Cross out every other 0 starting with the first 0.
   3. Scan across $\omega$. Cross out every other 1 starting with the first 1.
3. If no 0s and 1s remain in $\omega$, accept. Otherwise, reject.

I generally see how this algorithm is efficient. It gets rid of half of all 1s and 0s every iteration. How does it work though? Why must we reject if the total number of 0s and 1s remaining in $\omega$ is odd? - Short version: if the total length of the string is odd then there must be different numbers of 0s and 1s left. As long as the string started with the same number of 0s and 1s, then steps 2.2 and 2.3 will remove the same number of 0s and 1s, and thus preserve that property. – Steven Stadnicki Nov 30 '12 at 23:23 1 In a slightly more complicated form, this is effectively a search through a (the # of 0s) and b (the # of 1s), checking their low-order bits to confirm that they're the same (this is the 'parity of sum' test), then lopping off the low-order bits (steps 2.2 and 2.3) and continuing. – Steven Stadnicki Nov 30 '12 at 23:24 You give the algorithm, so I don't get what "How does it work?" asks for. Do you mean to ask why it works? – Raphael♦ Dec 1 '12 at 14:00

## 1 Answer

This is a very informal answer (other users will give you a more formal and concise answer :-)

Suppose you have two numbers written in binary notation (the smaller one left-padded with zeros); for example:

````
14 = 1110
 6 = 0110
````

A simple algorithm to check if they are equal is to compare the two binary representations bit by bit; this can be done by comparing the lowest bit and then dividing the number by two (i.e. dropping the lowest bit):

````
111[0] = 011[0] OK (14 mod 2 = 6 mod 2 )
 11[1] = 01[1]  OK ( 6 mod 2 = 3 mod 2 )
  1[1] = 0[1]   OK ( 3 mod 2 = 1 mod 2 )
   [1] = [0]    NO => 14 and 6 are different
````

You can easily notice that the lower bits of two numbers are the same if the numbers are both even or both odd (i.e. if their sum is even). The algorithm above does exactly the same thing but the numbers are represented in unary:

````
00000000000000 111111 even 0s, even 1s, sum even OK
-0-0-0-0-0-0-0 -1-1-1 odd 0s, odd 1s, sum even OK
---0---0---0-- ---1-- odd 0s, odd 1s, sum even OK
-------0------ ------ odd 0s, even 1s, sum odd REJECT!
````

-
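For concreteness, here is a direct Python transcription of the algorithm above (my own sketch; the explicit tape with 'X' standing for crossed-out symbols is an implementation choice):

```python
def is_0n1n(w: str) -> bool:
    tape = list(w)
    # Step 1: reject if a 1 appears before a 0.
    if any(a == '1' and b == '0' for a, b in zip(tape, tape[1:])):
        return False
    # Step 2: repeat so long as some 0s and some 1s remain on the tape.
    while '0' in tape and '1' in tape:
        # Step 2.1: reject if the number of remaining 0s and 1s is odd.
        if (tape.count('0') + tape.count('1')) % 2 == 1:
            return False
        # Steps 2.2/2.3: cross out every other 0, then every other 1,
        # starting with the first of each.
        for symbol in '01':
            cross = True              # start by crossing out the first one
            for i, c in enumerate(tape):
                if c == symbol:
                    if cross:
                        tape[i] = 'X'
                    cross = not cross
    # Step 3: accept iff no 0s and no 1s remain.
    return '0' not in tape and '1' not in tape

print(is_0n1n('000111'), is_0n1n('00111'), is_0n1n('010'))  # True False False
```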
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932977020740509, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/43366/tricky-integral/43368
Tricky Integral

Can one show that $$\int_{0}^{\infty} \! \frac {1}{1+x^{2}} \frac {x^{a}-x^{b}}{(1+x^{a})(1+x^{b})}~\mathrm{d}x=0 ~~~~~~~~ \forall ~a,b~\in \mathbb{R}?$$ Any hints?

- Is it $x^{x^{b}}$? – user9413 Jun 5 '11 at 11:06
- @Chandru: No, it is just $(1+x^b)$. – night owl Jun 5 '11 at 11:27
- @Rasholnikov: Right, but I think it wants it for any general $(a,b) \in \mathbb{R}$. – night owl Jun 5 '11 at 11:29

1 Answer

Yes, one can. Here are some hints, which should be expanded before being called a proof. Writing $x^a-x^b$ as $(x^a+1)-(x^b+1)$ and simplifying the fraction, one sees that it is enough to show that $I(a)$ does not depend on $a$, with $$I(a)=\int_0^{+\infty}\frac{\mathrm{d}x}{(1+x^2)(1+x^a)}.$$ To prove this, one could decompose $I(a)$ as the sum of an integral from $0$ to $1$ and an integral from $1$ to $+\infty$ and use the change of variable $y\leftarrow1/x$ in the latter. One would be left with $$I(a)=\int_0^{1}\frac{\mathrm{d}x}{(1+x^2)(1+x^a)}+\int_0^{1}\frac{y^a\,\mathrm{d}y}{(1+y^2)(1+y^a)}=\int_0^{1}\frac{\mathrm{d}x}{1+x^2},$$ which is independent of $a$, and this would yield the result.

- +1$\ldots~$marvelous. – night owl Jun 5 '11 at 11:34
- @night owl: thanks for this excessive but nice comment. – Did Aug 16 '11 at 21:23
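As a quick numerical sanity check of the claim (an editor's addition, not part of the thread; the sample exponents below are arbitrary choices), one can verify both that the original integral vanishes and that $I(a) = \pi/4$ independently of $a$:

```python
import numpy as np
from scipy.integrate import quad

def integrand(x, a, b):
    # The integrand of the original question.
    return (x**a - x**b) / ((1 + x**2) * (1 + x**a) * (1 + x**b))

for a, b in [(1.0, 2.0), (-0.5, 3.7), (0.0, 10.0)]:
    val, err = quad(integrand, 0, np.inf, args=(a, b))
    print(f"a={a}, b={b}: integral ~ {val:.2e} (quad error ~ {err:.1e})")

def I(a):
    # I(a) from the answer; it should equal pi/4 for every real a.
    val, _ = quad(lambda x: 1 / ((1 + x**2) * (1 + x**a)), 0, np.inf)
    return val

print(I(5.0), np.pi / 4)  # both ~ 0.785398...
```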
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525505900382996, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/45884/conjugate-variables-noethers-theorem-and-qm/45890
# Conjugate Variables, Noether's Theorem and QM

What is the underlying reason that the same pairs of conjugate variables (e.g. energy & time, momentum & position) are related in Noether's theorem (e.g. time symmetry implies energy conservation) and likewise in QM (e.g. $\Delta E \Delta t \ge \hbar$)?

- In QM momentum and coordinate are not "related in Noether's theorem" except for free motion. – Vladimir Kalitvianski Dec 4 '12 at 13:01
- This is an interesting question. – Terry Bollinger Dec 4 '12 at 17:29

## 2 Answers

Good observation. The pairs of variables in Noether's theorem are conjugate momenta; basically, they are generalized versions of mechanical momentum. An interesting note is that even without relativity, energy is recognized as the momentum of time. Generalized momenta are extensively used in the Hamiltonian formulation of classical physics. In quantization, the Hamiltonian and most related structures are basically unchanged, so the conjugate pairs remain the same.

- This leaves the question why quantization has to make conjugate pairs non-commutative. If you just look at the geometric meaning of conjugate pairs you can avoid that question. – A.O.Tell Dec 4 '12 at 12:32
- Were they ever commutative? Also, I don't think that was what the O.P. asked, and probably wasn't in his interests. – namehere Dec 4 '12 at 12:48
- What do you mean, "were they ever commutative?"? The observable algebra on the classical phase space IS commutative. My point is merely that you point to another question, namely why quantization works like that, which is in no way obvious. – A.O.Tell Dec 4 '12 at 13:29
- Conjugate pairs? ${x,p} = 1$ holds in classical physics. I didn't think you meant the observable algebra. Sorry about that. However, I think this is (mostly) irrelevant to the current matter, so I omitted it. – namehere Dec 4 '12 at 14:33
- Oops. Just noticed. Should be $\{x,p\}=1$. Bad formatting. – namehere Dec 4 '12 at 17:26

Both commutation and conservation work in the context of geometry, specifically the generation of transformations. It is important to understand that conjugate pairs like position and momentum relate as generators of translations. If these translations leave the system unchanged, i.e. you have a symmetry generated by the conjugate momentum, then the conjugate momentum must be conserved.

In quantum theory the conjugate pairs are not independent. This is because on a Hilbert space the generators of translations and the coordinates they act on relate like the derivative and the coordinate, which don't commute. So it really comes down to the geometry of the phase space of a system. In classical physics the phase space is just a normal manifold with the usual geometric structure. But in quantum theory the construction using translations on the Hilbert space results in a non-commutative geometry. There is no way to avoid this.

- Conservation means commutativity with the Hamiltonian, not with a conjugate variable. – Vladimir Kalitvianski Dec 4 '12 at 13:03
- I never claimed anything else? You must have misunderstood my argument. I never relate conservation to commutativity; instead I relate non-commutativity to unitary representations of translations. The part about conservation is solely about translations being generated by their conjugate momenta. Two different things. – A.O.Tell Dec 4 '12 at 13:31
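A one-line illustration of that last point (an editor's sketch, not from the thread): representing momentum as the translation generator $-i\hbar\, d/dx$ acting on wavefunctions, the commutator with $x$ is forced to be $i\hbar$. The symbol names below are this sketch's own:

```python
import sympy as sp

x, hbar = sp.symbols("x hbar", real=True, positive=True)
psi = sp.Function("psi")(x)

# Momentum as the generator of translations: p = -i*hbar*d/dx.
p = lambda f: -sp.I * hbar * sp.diff(f, x)

# (x p - p x) acting on an arbitrary wavefunction psi(x).
commutator = x * p(psi) - p(x * psi)
print(sp.simplify(commutator))  # -> I*hbar*psi(x), i.e. [x, p] = i*hbar
```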
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9061441421508789, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/106130/counting-functions-or-asymptotic-densities-for-subsets-of-k-almost-primes
# Counting Functions or Asymptotic Densities for Subsets of k-almost Primes

This question is an extension of this question. There the asymptotic density of k-almost primes was asked. By subsets I mean the following: let $\lambda$ be a partition of $k$ and $P_{\lambda}=\{ \prod_m p_m^{\lambda_m} \;|\; p_i\neq p_j \text{ for } i\neq j \}$. So $P_{(1,1)}$ would be all semiprimes, excluding squares. What I have are results on $k$-almost primes, being the union of all subsets $P_{\lambda}$. Here are some explicit formulas, like $$\pi_2(n)=\sum_{i=1}^{\pi(n^{1/2})}\left[\pi\left(\frac{n}{p_i}\right)-i+1\right].$$ A general asymptotic is given by $$\pi_k(n) \sim \left( \frac{n}{\log n} \right) \frac{(\log\log n)^{k-1}}{(k - 1)!}.$$ For the case of $P_{(1,1)}$ we just subtract the number of prime squares from $\pi_2(n)$ and get $$\pi_{P_{(1,1)}}(n)=\pi_2(n)-\pi(n^{1/2}),$$ but I don't see how to extend this. So again: what do the counting functions $\pi_{P_{\lambda}}(n)$ or their asymptotics look like?

- I think all the asymptotic weight goes to (1,...,1), since the sums of the squares (or cubes, etc.) of the reciprocals of the primes converge. So the k=2 case seems more general than it appears at first. – Charles Mar 30 '12 at 13:44
- Consider the 4-almost primes. The density of all the forms is (log log n)^3/(6 log n). I think that the density of (1,1,1,1) is (log log n)^3/(6 log n) and the density of the others is negligible in comparison. For example, the density of (2,1,1) is certainly less than 3 times the density of (1,1,1), which is at most (log log n)^2/(2 log n). – Charles Mar 30 '12 at 15:01

## 2 Answers

For a complete answer, see this paper: Integers with a predetermined prime factorization. There, the author defines two functions, $\sigma_{\lambda}(x)$ and $\pi_{\lambda}(x)$, where $\lambda$ is some vector in $\mathbb{N}^k$. These count the number of integers up to $x$ of the form $p_1^{\lambda_1}\cdots p_r^{\lambda_r}$, where $\pi_{\lambda}(x)$ has the added condition that $p_i\neq p_j$ when $i\neq j$. The result is general asymptotics for both. Also see this Math Stack Exchange question and answer.

Examples: Using your notation above, we have: $$\pi_{(1,1,2)}(x)\sim P(2)\, \frac{x \log \log x}{\log x},$$ where $P(s)=\sum_p \frac{1}{p^s}$ is the prime zeta function. Another example is: $$\pi_{(1,1,2,5)}(x)\sim C_{(1,1,2,5)}\, \frac{x \log \log x}{\log x},$$ where $$C_{(1,1,2,5)}=P(2)P(5)-P(10)=\sum_{p}\frac{1}{p^2}\sum_{p}\frac{1}{p^5}-\sum_{p}\frac{1}{p^{10}}.$$

- Wow, thanks. I was just scanning the paper, but it looks exactly like what I need. – draks ... May 3 '12 at 21:29
- @draks: If you have any questions or suggestions about the paper, feel free to email me. – Eric♦ May 3 '12 at 21:38
- @draks: Ahh sorry! I had thought it appeared on my profile, but I was wrong. My email is: naslund.eric (at) gmail.com – Eric♦ May 6 '12 at 22:38

I've been asked to submit this partial answer. Asymptotically, almost all k-almost primes are squarefree. This follows from Landau's asymptotic $$\pi_k(x)\sim\tau_k(x)\sim\frac{x}{\log x}\cdot\frac{(\log\log x)^{k-1}}{(k-1)!},$$ which holds both for numbers with $\omega(n)=k$ and for numbers with $\Omega(n)=k$. First, consider $k$-almost primes which are divisible by the square of some prime. They have only $\ell<k$ distinct prime factors, and so there are at most $\pi_\ell(x)$ such numbers (in the $\omega$ sense).
Repeating this for each of the other $p(k)-1$ classes of $k$-almost primes, one gets their total count to be at most $\left(p(k)-1\right)\pi_{k-1}(x)$. Hence the squarefree $k$-almost primes must make up the whole asymptotic density. Explicitly, in your notation, $$\pi_{P_{(1,\ldots,1)}}(x)\sim\frac{x}{\log x}\cdot\frac{(\log\log x)^{k-1}}{(k-1)!},$$ where there are $k$ 1s, and $$\pi_{P_{\lambda}}(x)\ll\frac{x}{\log x}\cdot(\log\log x)^{k-2}$$ if there are any numbers larger than 1 in $\lambda$.
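To see both statements empirically (an editor's sketch, not part of the thread; the bound `N` and the helper names are arbitrary choices), one can tally integers by the exponent pattern of their factorization and compare the squarefree class with Landau's asymptotic for $k=2$:

```python
import math
from sympy import factorint

def signature(m):
    """Sorted exponent tuple of m's factorization, e.g. 12 = 2^2 * 3 -> (1, 2)."""
    return tuple(sorted(factorint(m).values()))

N = 100_000
counts = {}
for m in range(2, N + 1):
    sig = signature(m)
    counts[sig] = counts.get(sig, 0) + 1

# Landau: pi_2(n) ~ (n / log n) * log log n for k = 2.
landau2 = N / math.log(N) * math.log(math.log(N))
print("squarefree semiprimes (1,1):", counts[(1, 1)])
print("prime squares (2,):        ", counts[(2,)])
print("Landau asymptotic for k=2: ", round(landau2))
```

The prime-square count grows only like $\pi(\sqrt{N})$, so the squarefree class carries essentially all of $\pi_2(N)$, as the answer argues.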
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9389055967330933, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/48511-elliptic-integral-sqrt-cosx.html
# Thread:

1. ## elliptic integral of SQRT(cosx)

hey.. we are studying elliptic integrals in my class and I did some additional research online, where I came across the formula $\int_0^x \sqrt{\cos\theta}\,d\theta = 2\textbf{E}(x/2, \sqrt{2})$. I couldn't figure out how to get there myself; would someone mind writing out the proof of this?

2. If we define the elliptic integral of the second kind as: $\textbf{E}(\phi,k)=\int_{0}^{\phi} \sqrt{1-k^2\sin^2(\theta)}\,d\theta$ and we can write (using the half-angle formula, then the substitution $u=\theta/2$): $\int_0^x \sqrt{\cos(\theta)}\,d\theta=2\int_{0}^{x/2}\sqrt{1-2\sin^2(u)}\, du$ then: $2\int_{0}^{x/2}\sqrt{1-2\sin^2(u)}\, du=2\textbf{E}(x/2,\sqrt{2})$

3. Is there a way to write it in the forms of both the first and second kinds of elliptic integrals? ($F$ is the same as $E$, but $d\theta$ is on the top of a fraction in the integral, with the rest on the bottom of the fraction.)

4. Originally Posted by minivan15: "Is there a way to write it in the forms of both the first and second kinds of elliptic integrals?" I don't see how it can be written in terms of elliptic integrals of the first kind: $\textbf{F}(\phi,k)=\int_0^{\phi}\frac{d\theta}{\sqrt{1-k^2\sin^2(\theta)}}$
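As a numerical cross-check of post 2's computation (an editor's addition, not part of the thread; the test point x = 1.2 is arbitrary, chosen inside (0, π/2) so both integrands are real):

```python
import numpy as np
from scipy.integrate import quad

x = 1.2  # any 0 < x < pi/2 keeps cos(t) >= 0 on the range of integration

# Left side: the original integral of sqrt(cos t).
lhs, _ = quad(lambda t: np.sqrt(np.cos(t)), 0, x)
# Right side: the half-angle form 2 * integral of sqrt(1 - 2 sin^2 u).
rhs, _ = quad(lambda u: np.sqrt(1 - 2 * np.sin(u) ** 2), 0, x / 2)

print(lhs, 2 * rhs)  # the two values agree to quad's tolerance
```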
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246410727500916, "perplexity_flag": "head"}
http://mathoverflow.net/questions/69411/bound-on-the-real-part-of-roots-of-a-polynomial
## Bound on the real part of roots of a polynomial

Hi, I'm not sure if you can help me with this, but I'm currently looking for an upper bound on the real part of the roots of a polynomial with real coefficients. In other words, I have a polynomial $a_n x^n + a_{n - 1} x^{n - 1} + \dots + a_1 x + a_0 = 0$ where $a_0, a_1, \dots, a_{n}$ are real coefficients. Suppose that the roots are $t_1, t_2, \dots, t_n$. I want to find a bound $t^*$ such that $\mathrm{Re}(t_i) \leq t^*$ for $i = 1, \dots, n$. Is there a theorem that provides such a result? I've seen Rouché's theorem, but from what I've read, this only tells me if the root lies in some circle of specified size. I may be missing something, but I don't think that it tells me much about the real part of the root. Any help is appreciated. Thank you.

- If you know that the root lies within a circle, you have bounded the root, and imagining the circle in the complex plane, it lies between two vertical lines (tangent to the circle) which provide explicit bounds and which can be determined as the real part of the circle +/- its radius. – Mark Bennet Jul 3 2011 at 20:15
- That was supposed to be the real part of the centre +/- the radius. – Mark Bennet Jul 3 2011 at 20:17
- I can't imagine that knowing that the polynomial has real coefficients will help. – Thierry Zell Jul 3 2011 at 20:46
- A real polynomial with all its roots in the left half of the complex plane is a Hurwitz stable polynomial. There are characterizations of these polynomials, in particular the Routh–Hurwitz criterion. Or you can apply a Möbius transformation to map the half-plane to a disc and apply the Schur–Cohn condition. – Chris Godsil Jul 3 2011 at 21:02
- @Mark, @Chris, thank you for your comments. I should have mentioned this before, but the polynomial is convex and increasing on the right half plane and so has one positive real root. I don't have much knowledge of the roots of polynomials, but does this imply that the real parts of the complex roots lie in the left half plane (or is it still possible that the real parts of these roots can be positive)? – unknown (yahoo) Jul 3 2011 at 21:35
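For a crude but fully explicit bound in the spirit of Mark Bennet's comment (an editor's sketch, not from the thread; the helper name is made up): Cauchy's bound puts all roots in the disc $|z| \le 1 + \max_i |a_i/a_n|$ centered at the origin, and the radius of any such disc is automatically an upper bound on every real part.

```python
import numpy as np

def real_part_bound(coeffs):
    """Cauchy bound on |z|, hence on Re(z), for all roots.

    coeffs = [a_n, ..., a_1, a_0], highest degree first, a_n != 0.
    """
    a = np.asarray(coeffs, dtype=float)
    return 1 + np.max(np.abs(a[1:] / a[0]))

coeffs = [2.0, -3.0, 0.5, 7.0]   # 2x^3 - 3x^2 + 0.5x + 7
roots = np.roots(coeffs)
print("max Re(root):", roots.real.max())
print("Cauchy bound:", real_part_bound(coeffs))
```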
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419431686401367, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/2774/rotate-a-long-bar-in-space-and-get-close-to-or-even-beyond-the-speed-of-light
# Rotate a long bar in space and get close to (or even beyond) the speed of light $c$

Imagine a bar spinning like a helicopter propeller at $\omega$ rad/s. Because the extremes of the bar move at speed $$V = \omega r,$$ it seems we could get near $c$ (the speed of light) by applying some finite amount of energy, just choosing $$\omega = V / r.$$ The bar should be long, low-density, and strong, to minimize the amount of energy needed.

For example, for a $2000\,\mathrm{m}$ bar (so $r = 1000\,\mathrm{m}$), reaching $V = c$ would take $$\omega = 300\,000\ \frac{\mathrm{rad}}{\mathrm{s}} \approx 2\,864\,789\ \mathrm{rpm},$$ and a dental drill can commonly rotate at $400\,000\,\mathrm{rpm}$, which already gives $V$ (with dental drill) = 14% of the speed of light. Then I say this experiment can really be made, and the bar extremes could approach $c$. What do you say?

EDIT: Our planet orbits the Sun, which orbits the Milky Way, and who knows what else; any point on Earth therefore has a speed of 500 km/s or more against the CMB. I wonder, if we are orbiting something at that speed, whether there would be detectable relativistic effects in different directions of measurement: simply by extending a long bar (or any directional mass) in different galactic directions, we should measure a mass change due to relativity, simply because $V = \omega r$. What do you think?

- `V (with dental drill) = 14% of speed of light` :-o You scared me with this. I actually had to calculate it to understand what you meant. =P – Bruce Connor Jan 14 '11 at 0:29
- haha, now that you said it, I see it's a funny statement – Hernan Eche Jan 14 '11 at 12:08
- "... simply extending a long bar or any directional mass in different galactic directions we should measure mass change due to relativity." Well, Michelson–Morley thought of something similar. The answer is no. We won't detect a change in weight or size, by the principle of relativity, because we travel at the same speed as the bar. – Andrei Apr 28 '11 at 21:02

## 7 Answers

Imagine a rock on a rope. As you rotate the rope faster and faster, you need to pull stronger and stronger to provide the centripetal force that keeps the stone on its orbit. The increasing tension in the rope would eventually break it. The very same thing would happen with the bar (just replace the rock with the bar's center of mass). And naturally, all of this would happen at speeds far below the speed of light. Even if you imagined that there exists a material that could sustain the tension at relativistic speeds, you'd need to take into account that a signal can't travel faster than the speed of light. This means that the bar can't be rigid. It would bend, and the far end would trail around. So it's hard to even talk about rotation at these speeds. One thing that is certain is that strange things would happen. But to describe this fully you'd need a relativistic model of solid matter. People often propose arguments similar to yours to show that Special Relativity fails. In reality what fails is our intuition about materials, which is completely classical.

- I didn't understand: do you say the bar will break? Or that thinking the bar will break is our intuition about materials? I want to know what will happen and what the physical restriction on doing the experiment is (of course I suppose something must be wrong, but I asked to understand it more deeply). Thanks – Hernan Eche Jan 13 '11 at 15:06
- @Hernan: both. The bar is not rigid. It will first bend because the signal has to travel from one end to the other; the end closer to you is already moving, but the far end won't start moving until the stress wave reaches it.
Also, there will be huge stress on the material in the radial direction, which will eventually break it. – Marek Jan 13 '11 at 17:01
- So perhaps relativity limits the possible size of things... because if something has too much length, it breaks apart at the first rotation, because some points behave massively and break – Hernan Eche Jan 13 '11 at 20:00
- @Hernan: actually, relativity limits the angular velocity you can get an object up to. – David Zaslavsky♦ Jan 13 '11 at 22:26
- @Hernan: but remember that you'd need a lot of energy to rotate that object in the first place! E.g. you'd need an infinite amount of energy to accelerate a particle to the speed of light. And that's just a particle, not a huge extended object. A second point is that in nature there are no solid objects on macroscopic scales. On the scale of galaxies there is just intergalactic dust and stars following the rules of General Relativity. – Marek Jan 13 '11 at 22:33

There is a real object with a relativistic surface speed: a millisecond pulsar. The swiftest-spinning pulsar currently known spins 716 times a second. The surface speed of such a pulsar, with a radius of 16 km, is about $7\times 10^7$ m/s, or 24% of the speed of light. It is calculated that pulsars would break apart if they spun at a rate of more than 1500 rotations per second.

- great to know! – Hernan Eche Jan 13 '11 at 20:03

In your calculations you assume that your propeller is a rigid body. You cannot use that assumption when your speeds are not much smaller than the speed of light, because "there are no rigid bodies in relativity".

- Mmm, this left me thinking... So that could lead to a definition of body, i.e. a body can only exist in one and only one inertial frame of reference, never in two at the same time – Hernan Eche Jan 13 '11 at 19:56

remember that in a three-dimensional description of special relativity the momentum of an object is given by $$\mathbf{p} = \gamma m \mathbf{v}$$ with the so-called Lorentz factor $$\gamma = \frac{1}{\sqrt{1-v^2/c^2}}.$$ Now, do you think you can accelerate the masses within the slab to a speed greater than light, or do you think that something is wrong with your physical model of the system? Sincerely, Robert

- Nice letter. I want to know what is wrong with doing the real experiment, or with thinking you can deliver energy (taking your time) and accelerate the bar (gradually), transforming that energy and angular velocity into a higher tangential velocity; angular momentum conservation will store the energy, so I can continue adding more and more energy – Hernan Eche Jan 13 '11 at 15:02
- Hernan writes "near c", but he did not write about "speed greater than c". – Andrei Apr 28 '11 at 21:03
- @Andrei: The argumentation remains the same. Greets – Robert Filter Apr 29 '11 at 7:05

I say no. Assuming all the practicalities work, you can get arbitrarily close to c, but not reach c. You can see this easily from the relativistic formula for kinetic energy: $$E_k = mc^2\left(\frac{1}{\sqrt{1-v^2/c^2}}-1\right).$$ As $v$ approaches $c$, the energy you need to supply to a particle at the end of the bar tends to infinity.

- The energy needed would vary in every segment of the bar because its speed would be different, so the bar would break only by relativistic effects? Perhaps – Hernan Eche Jan 13 '11 at 15:14
- The bar would break because it "reacts" to force at the speed of sound in it, which is not more than 3000 m/s.
When you cross 3000 m/s the bar is like a liquid :-) – BarsMonster Jan 13 '11 at 15:47
- Your assumption of "all the practicalities working" is far too big of an assumption; you're assuming away the very heart of the question, which is: can I do this using a rigid body? – Mark Beadles Dec 18 '11 at 3:41

Dear Hernan, as the distant parts of the bar approach the speed of light, they become heavier, so it becomes harder to accelerate them: you can never reach (or surpass) the speed of light. It doesn't matter whether you try to accelerate the "final segments" of the bar by jets or by their attachment to the rest of the bar that is being pushed in the middle: the speed of light can never be reached. If you want to speak in terms of torques and moments of inertia (of the bar), the moment of inertia goes to infinity - much like the mass itself - when the velocity of some points on the bar approaches the speed of light. So much like you have to modify the formulae for masses of moving objects by relativistic effects, you need to modify the formulae for the moments of inertia. So your statement that you need a finite energy to get to the speed of light is invalid. You would need an infinite amount of energy. For speeds below the speed of light, the total energy that you need can simply be calculated as the sum of the kinetic energies of all the segments of the bar.

What if you rotate the bar (wouldn't a sphere be better?) at 50% of c, then orbit it in the opposite direction at 51% of c, before finally assuming your own position to be stationary? The bar then rotates faster than light! :) I recently read that existing technology is capable of accelerating an object up to 99% of light speed over the course of a 12-month burn, and I wondered how the same technology could be used to spin an object in a laboratory, and what kind of speeds have been recorded doing so. I considered how a "light windmill" could be rotated using supercooled magnetic bearings in a vacuum chamber, with the 'sails' being fed by laser discharge. I suspected it wouldn't be possible to get speeds close to c, because that is the speed photons fire out of a laser, meaning I would need either an infinite amount of energy or a limited supply of energy over an infinite amount of time in order to go faster. Then I thought about what would happen if material could be made to spin faster than the speed of light. Would multiple 'versions' of the spinning object begin to occupy the same space at the same time as the object starts to 'lap' itself chasing around its axis? After thinking about this, the light-speed barrier made more sense to me compared to when I thought about objects accelerating in straight lines. Isn't there also an effect on space-time in the vicinity of rapidly rotating objects? Don't they drag space around with them as they spin? I then started thinking about why matter piles on mass when it accelerates, and my head finally popped. The best I could think of is that objects interact with the Higgs field more strongly at higher speeds, giving them additional mass. Like driving through rain at high speed - the faster you go, the more rain hits your windscreen. Ben

- Hi Ben, this isn't really an answer, and it sounds like you have a few questions. You should search the site for any questions you have, and if you can't find answers, go ahead and ask your questions. – Brandon Enright May 9 at 19:21
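To put a number on "you would need an infinite amount of energy" (an editor's sketch, not part of the thread; the rod mass M = 100 kg is an arbitrary assumption, and the rod is naively treated segment by segment in the lab frame, ignoring the stress and rigidity problems raised above):

```python
import numpy as np
from scipy.integrate import quad

c = 3.0e8    # m/s
M = 100.0    # kg, assumed rod mass (illustrative only)
R = 1000.0   # m, half-length of the 2000 m bar, spun about its center

def kinetic_energy(tip_speed):
    """Sum of (gamma - 1) dm c^2 over the rod's segments."""
    lam = M / (2 * R)  # uniform linear mass density
    def integrand(r):
        v = tip_speed * r / R
        gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
        return (gamma - 1.0) * lam * c**2
    val, _ = quad(integrand, -R, R)
    return val

for frac in [0.14, 0.9, 0.99, 0.9999]:
    print(f"tip speed {frac:7.4f} c  ->  KE ~ {kinetic_energy(frac * c):.3e} J")
```

The printed kinetic energy grows without bound as the tip speed approaches c, which is the quantitative content of the answer above.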
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9392709136009216, "perplexity_flag": "middle"}
http://sbseminar.wordpress.com/requests/
## Requests

This is a page where anyone can suggest topics on which they would like to see us (and perhaps others) blog. We make no promises about following through on suggestions, but will make a good faith effort.

## Comments»

1. Nathan Dunfield - June 14, 2008
How about a discussion of long-distance collaboration tools and methods, beyond just using email and talking on the phone? It seems like there are a lot of things that might work, e.g. pointing a cheap webcam at a piece of paper, using collaborative text editors (e.g. SubEthaEdit), IM'ing (some clients have LaTeX support, I think), virtual whiteboards (e.g. Scriblink.com), but which might also turn out to be useless in practice for all sorts of annoying technical reasons. So it would be interesting to hear from people who have had success or failure with various methods. [Ed. - answered here.]

2. Mikael Vejdemo Johansson - June 15, 2008
As for IMing, both Pidgin (formerly GAIM) and Miranda have LaTeX support by a plugin that takes \$\$ .. \$\$-sweeps and dumps them through a LaTeX installation before including them as images. That said, the request is hereby seconded.

3. Rolando Calrissian - June 15, 2008
Could you please discuss some of the most important and fundamental theorems in mathematics, and explain why they are important. For example, suppose that you were limited to knowing 5 theorems. Which 5 would give you the best insight into the true nature of mathematics?

4. Sander Kupers - June 18, 2008
Maybe you could explain a bit about elliptic cohomology and topological modular forms. Jacob Lurie gave a talk about this at our university a few weeks ago and it seemed like a subject with many interesting things still to be discovered. In particular I'm wondering about promising candidates for spectra representing elliptic cohomology or tmf.

5. Subverting the system. « Secret Blogging Seminar - June 18, 2008
[...] Subversion, often abbreviated as SVN, is a "version control system". Prompted by Nathan's request to hear about collaborative software for mathematicians, and the comments on Ben's post on [...]

6. Thomas Riepe - June 20, 2008
I would be curious about learning more on: "… many constructions of classical algebra (eg, the theory of modular forms) are beginning to be seen to have deep homotopy-theoretic foundations." http://www.alainconnes.org/docs/Morava.pdf And then I found a remark about "Mixed Hodge Modules" as char. 0 analogs to "perverse mixed complexes" in char. p. What's all that? Does there exist something analogous to Deligne's "yoga of weights"?

7. Mohib Ali - June 23, 2008
Can you talk a little bit about the ISI journals which reply faster than others, in other words those that take minimal time for their decision about acceptance or rejection. [Ed.- answered here.]

8. Josh - July 2, 2008
Well, this one is a bit simpler than the others above, but I'd like to see a blog post on the construction of covering spaces. Everyone (?) knows that nice spaces have universal covers, and we all (?) know the theorems about constructing them. But it'd be nice to see some examples explicitly shown. And, of course, once you've shown how to build the universal one, all the others follow by modding out by an appropriate subgroup.

9. H - July 2, 2008
Someone has posted a proof of the Riemann Hypothesis on the arXiv: http://arxiv.org/abs/0807.0090 Does anyone know this guy? It seems that he is from Brigham Young in Utah.

10. Generalized Homology Theories « Secret Blogging Seminar - July 2, 2008
[...] Requests.
Tags: Algebraic Topology, Generalized Homology trackback Recently there have been some comments on our requests [...]

11. A.J. Tolland - July 2, 2008
H, That Riemann hypothesis paper looks like honest work, rather than just crackpottery, so I'm sure the experts are having a look at it. But I'm not going to get too excited about the proof until a few pros tell me it's legit.

12. Chris Schommer-Pries - July 6, 2008
Can someone explain about category O? Ben, I'm looking at you. I missed your series of talks on it when you were at Berkeley and I don't really know a lot about it. What is it and why should I care?

13. Thomas Riepe - July 14, 2008
SGA 7.1, exp. IX, contains (in its introduction and section 13.4) remarks about ideas and conjectures of Deligne on a "theorie de Neron pour motifs de poids quelconque". It would be great if someone would explain that.

14. frank - July 14, 2008
Could someone tell me who got the prizes at the European Congress of Mathematics today? And possibly describe their achievements at the level of a beginning grad student and/or point to relevant papers? Thanks!

15. Ben Webster - July 14, 2008
Thomas- That's a pretty tall order. I got about as far as translating the title (roughly, "Neron theory for motives of any weight"). I think you're probably better off asking an actual algebraic geometer than any of us.

16. Ben Webster - July 14, 2008
By the way, the portion of SGA Thomas is referring to can be found here: http://modular.math.washington.edu/sga/sga/7-1/7-1t_313.html

17. anon - July 15, 2008
For the person who asked about the ECM, the prize booklet is now online at http://www.5ecm.nl/prizewinnersbook.pdf and contains short summaries of their work and their lecture abstracts. Briefly, the winners are: Avila, Borodin, Green, Holtz, Klartag, Kuznetzov, Naor, Saint-Raymond, Smoktunowicz, and Villani.

18. Thomas Riepe - July 25, 2008
In case someone knows a good Japanese encyclopedia of mathematics (in Japanese, for learning to read Japanese math), I'd be happy about the bibliographic info, incl. the ISBN, so that I can ask the library here to obtain it.

19. Daniel Moskovich - August 1, 2008
L Theory (K theory of quadratic forms). I've never understood it (despite considerable effort at one point), and I want to understand it on a conceptual level, and what it's good for!

20. ugeesana - August 6, 2008
Dears, Can anybody tell me how to type LaTeX on the blog? I saw the announcement here (http://faq.wordpress.com/2007/02/18/can-i-put-math-or-equations-in-my-posts/), but when I copy the code "$i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$" to the HTML board of my blog, I really cannot see what it produced… Can you help me? Here is my blog (http://ugeesana.wordpress.com/), thank you.

21. ugeesana - August 6, 2008
….but I can see the result here>< why?!

22. John Armstrong - August 6, 2008

23. davidspeyer - August 6, 2008
I just put a post with LaTeX in a comment thread on your blog and it seemed to work fine. Does this not work for you? (A few points: I find wordpress's "Visual" editing environment works better than the "HTML" environment. It might not work to copy the HTML source from our blog to yours, because the HTML works by linking to images on the wordpress server and I'm not sure that you have access to them. However, you can see our LaTeX source by holding your mouse over one of our equations, in which case you can copy that LaTeX source.)

24.
Anton Fonarev - August 7, 2008
Hi, Maybe you could tell something about current research programs? That might be interesting; I haven't even seen a nice list of them yet.

25. Scott Carnahan - August 7, 2008
Anton, Could you make your question more specific? For example, did you mean large-scale long-term structures like the Langlands program, or our personal projects (or those of people we know)? A good sample of fashionable problems can often be found by looking at the websites of institutes like Fields, IAS, or MSRI, since they tend to bring in specialists for 6 months to a year at a time for special programs. AIM often puts open problem lists on their workshop pages.

26. Anton Fonarev - August 7, 2008
Scott, Sure, I'm talking about large-scale: Langlands, Mirror Symmetry and others. Thanks for the links; some of them I didn't know before.

27. ugeesana - August 11, 2008
Dear Speyer, Thank you for coming here to test and tell me. Gee can use LaTeX in my posts after trying it now.

28. John Mangual - September 30, 2008
Could someone explain root systems or quivers or, like, all of representation theory?

29. Ben Webster - September 30, 2008
I could, but then I would have to kill you…. But seriously, a little bit about quivers I may be able to handle. If I ever finish all the papers I'm working on.

30. David Speyer - October 1, 2008
My reply is similar to the second part of Ben's :). I don't think I'll be writing any blog posts for a while, but I'll move quivers to the front of the line for when I do. In the meantime, here are two good references: Derksen and Weyman's article in the Notices and Kac's paper "Infinite root systems, representations of graphs and invariant theory" in the 1982 Journal of Algebra (does not seem to be available online).

31. John - October 1, 2008
Ok David. I think I found a copy http://gdz.sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=171997

32. Jaime Montuerto - November 1, 2008
Hi Guys, This problem/question has interested me for a long time. Do all primes divide a Mersenne number? All composites of course are multiples of primes, but a group or set of composites, say like sums of squares, Fermat numbers, or functions of composites, would have only a subset of primes as factors. E.g. 7 does not divide any Fermat number or sum of squares, it seems. I would love to see your opinion on this. Regards

33. davidspeyer - November 1, 2008
Jaime, I think you are asking whether every prime number $p$ divides a number of the form $2^k-1$ for some $k$. If so, the answer is yes (except for $p=2$, of course). By Fermat's Little Theorem, $p$ divides $2^{p-1}-1$. If you meant to require that $k$ be prime as well, the answer is no. It is easy to check that $11$ divides $2^k-1$ if and only if $10$ divides $k$. Since no prime is divisible by $10$, there is no prime $k$ such that $11$ divides $2^k-1$. Regarding sums of squares, you are indeed correct that $7$ does not divide $a^2+b^2$ unless $7$ divides both $a$ and $b$. Let me encourage you to try factoring a lot of numbers of the form $a^2+b^2$; there is a very nice pattern in which primes show up. I'll tell you what it is if you like, but it might be more fun for you to find it yourself.

34. davidspeyer - November 2, 2008
The discussion of quivers is posted.

35. Jaime Montuerto - November 2, 2008
Dear David, Thank you for the elegantly simple proof. Somehow I knew about the connection with Fermat's Little Theorem but didn't see the way you did it. I think there is a big consequence of this, though.
If all primes divide some composites and Mersenne primes divide themselves, then there is a direct link from all primes to all Mersenne numbers. Then I think the distribution of primes is linked to the distribution of Mersennes, and the number of primes is equal to the number of Mersennes. On the lower range of course there are more primes, because some Mersennes with prime order are composite, but as the primes become scarce their numbers are almost equal. This is a bold statement and I would like to hear from you again. Again thank you. Best

36. Nicholas Proudfoot - November 14, 2008
What is a moduli space?

37. David Speyer - November 16, 2008
I assume you have already read David Ben-Zvi's excellent article?

38. Nicholas Proudfoot - November 20, 2008
Yes, in fact I have. I thought it would be a good topic for your blog, but I see your point that an accessible treatment of the subject is already available.

39. Jason Starr - November 20, 2008
What is a moduli stack? What is a moduli n-stack? Which moduli stacks are algebraic? How does one determine this in general? How does one carry out the general approach in some important special cases? What are the techniques for studying moduli spaces which are special to characteristic 0? What techniques are special to positive characteristic? What is the historical background of moduli spaces (in particular, where did the term "moduli" come from)? What is the current status of moduli of higher-dimensional varieties? What are the basic open problems for moduli of higher-dimensional varieties?

40. Scott Carnahan - November 21, 2008
At first I thought this was a joke or some kind of brain dump, but now I think you are making the point that David Ben-Zvi's article doesn't cover a lot of questions that you (and other mathematicians) find interesting or even central to the study of moduli. I'm afraid most of them are "above my pay grade", and in particular, the notion of n-stack is, as far as I know, not precisely defined. We can safely say that a moduli n-stack is something that parametrizes families of geometric objects of an (n-1)-categorical nature, but I don't really know what that means, even when n=1.

41. David Speyer - November 22, 2008
Well, I certainly don't think all of Jason's questions could be answered in one blog post, and I don't know enough to answer any of them right now. But they are the sort of questions that I was (badly) hinting at with my question to Nick: what sort of things do people want to know about that aren't in David Ben-Zvi's great intro? Here are some blog posts that I could imagine writing or, even better, could imagine some of the rest of you writing:

* What are the differences between a fine moduli space, moduli stack, coarse moduli space and a miniversal deformation space? What theorems exist relating them? I basically know this stuff, though I'd want to look up a lot of technical points.

* What are some specific, well-worked examples of moduli spaces of higher-dimensional varieties? (Polarized abelian varieties, hyperplane arrangements, curves in the plane.) Why is, for example, the moduli space of polarized K3's considered so much harder to work with? I only have a vague knowledge of this stuff, but I've been wanting to learn. I wonder if Paul Hacking could be convinced to write a guest post…

* There has been a great deal of progress on the minimal model program recently. What does it imply for moduli spaces of higher-dimensional varieties?
I don't know this stuff at all, but I bet it would make a good blog post if someone else wrote it.

As for Jason's questions about characteristic dependence and $n$-categories, I'm afraid I don't even know enough to know whether they could be answered in blog posts. But maybe some of you do?

42. Urs Schreiber - November 22, 2008
"the notion of n-stack is, as far as I know, not precisely defined." There is the Jardine-Toen approach to $\infty$-stacks, which defines the category of oo-stacks as the localization of the category of simplicial presheaves on stalkwise weak equivalences of simplicial sets. Then of course there is Lurie's "Higher topos theory", which really is meant as "Higher Grothendieck-topos theory", i.e. higher sheaf theory. I suppose that solves the question of what an (oo,1)-stack is. Building on Ross Street's Categorical and combinatorial aspects of descent theory one obtains a nicely elegant and concrete formulation of the descent condition for "rectified omega-prestacks", i.e. for presheaves with values in strict oo-categories. That leads to a notion of omega-stacks then. While not being fully general, one can do nice things with that. I am currently using this for talking about differential nonabelian cohomology. Our hope is that, using Simpson's conjecture and the fact that every oo-stack should be equivalent to a rectified one (where the restriction maps associate strictly), all one needs to do about omega-stacks in order to get fully general oo-stacks is to allow them to take values in omega-categories with weak units.

43. Jason Starr - November 22, 2008
Dear all, I wasn't making a joke or anything. Those are the questions I would have liked to hear the answers to (even if I didn't know the precise meaning of the question) when I was learning about moduli. There are excellent references out there. But there are few that really explain the key techniques and illustrate them thoroughly on important special cases.

44. CM - November 23, 2008
On a more mundane note – the conference list is a bit outdated. Do you guys know of anything exciting happening next spring or summer?

45. Scott Carnahan - November 23, 2008
Urs, Is there a good reason why every infinity stack should be equivalent to a rectified one? I know very little about higher categories, but I was under the impression that strictification is a bad idea or prone to failure beyond 2-categorical constructions. I recently learned that Clark Barwick has a "well-behaved" theory of (infinity,n)-categories based on a generalization of complete Segal spaces, and some details are written up in the beginning of Jacob's recent paper: (infinity,2)-categories and Goodwillie calculus. I don't know how this precisely relates to moduli stacks, since questions of geometries and topoi with non-invertible higher morphisms are quite beyond me.

46. Urs Schreiber - November 24, 2008
"Is there a good reason why every infinity stack should be equivalent to a rectified one?" Yes, I think one expects a higher-dimensional Yoneda lemma. For X an n-stack its rectification should be Hom_{nStacks}( Repr(–), X ), where Repr is supposed to denote the functor embedding the underlying site into n-stacks over it. "I was under the impression that strictification is a bad idea" I wouldn't put it as generally as that. More concretely, for stacks one should, I think, beware that, of the two characteristics of presheaves with values in categories, a) they satisfy higher descent, b) they may respect composition possibly only weakly,
the first one is the essential one, capturing the point of stacks as higher sheaves, while the second one is there more for technical reasons. Compare for instance the Jardine-Toen approach to infinity-stacks which I mentioned: this is based on presheaves with values in simplicial sets. The simplicial sets represent the oo-categorical values (after localization), but the presheaves themselves are true presheaves, i.e. true functors, i.e. strictly associative. Still, these gadgets can model oo-stacks in this setup.

47. Some Grad - December 3, 2008
Hi, I have been struggling to understand the uses of special functions, their representation-theoretic significance and their geometric significance. I haven't really succeeded in finding a clear overall view. More significantly, I couldn't find any good reference that gives a unified survey. I would really benefit from such an exposition. Thanks

48. john mangual - December 5, 2008
I would like to hear about Toric Varieties and their relationship to Amoebas (or the other way around). For context, I am aiming at reading the papers on Tropical Geometry by Gregory Mikhalkin and others. Another word I get stuck on is "Gromov-Witten Invariant". I have probably said a mouthful.

49. David Speyer - December 5, 2008
You can understand a lot about amoebas (and tropical geometry) without knowing anything about toric varieties. Most of the time, all you need to know is the connection between the Newton polytope of a polynomial and its amoeba or tropicalization. Results about polynomials with a given Newton polytope can be rephrased in terms of sections of line bundles on toric varieties, but that formulation isn't helpful in the early stages. I can see two posts to write here: (1) My standard intro to toric varieties. I've taught people the toric formalism a bunch of times now. I should definitely do a blog post on this. (2) An intro to amoebas and tropical hypersurfaces, developing the relevant technology in parallel. I can also definitely do this. As always, don't hold your breath :).

50. David Speyer - December 5, 2008
Some grad — I'm not sure what you're looking for. My best guess is that you want to understand how modular/automorphic forms show up in representation theory. I'm not qualified to write that, but Scott Carnahan is. Is that what you are looking for? If you don't know exactly what you are looking for, can you start by telling us what sort of special functions you care about?

51. A.J. Tolland - December 5, 2008
Hi John, I've been meaning to write about Gromov-Witten theory for a while now. I'll put together a post sometime over the weekend.

52. Charles Siegel - December 5, 2008
Though it's stalled due to some administrative stuff, I'm writing up my lecture notes that (with a few gaps) prove Kontsevich's formula and posting them. They'll probably all be scheduled for posting, at least, over the course of next week. All will be up before New Year's.

53. Some Grad - December 5, 2008
Hi David, First I apologize for a dumb mistake: I meant symmetric functions and not special functions. Having said that, I wanted to understand how they arise in various geometric situations and the interplay between their combinatorial properties and the geometry of the Grassmannian. On the representation theory side I looked into the monograph "Symmetric functions and orthogonal polynomials", but I guess I got lost in the myriad of combinatorial identities.
To put it in a concise way, I am probably seeking a motivation/exposition of the connection between the combinatorial properties of the symmetric functions and how they determine and are determined by the geometry of flag varieties and the related representation theory of GL_n. Thanks

54. David Speyer - December 6, 2008
No need to apologize, just trying to understand! Here is a quick answer, and some references. Note that I am being sloppy about boundary cases, and about what category I am working in; I think it is probably better to give the big picture first and then get the boundary cases right. So the question is to explain why the following objects match up:

(1) Symmetric functions in d variables.
(2) Representations of GL_d.
(3) Vector bundles on G(d, infinity).
(4) Cohomology classes on G(d, infinity).

More precisely, we have four rings, each of which has a natural sub-semiring.

(1) Symmetric polynomials in d variables. The sub-semiring is symmetric functions which are positive in the Schur basis.

(2) The semiring of representations of $GL_d(\mathbb{C})$, with direct sum as addition and tensor product as multiplication. We get the ring by adding formal negatives of representations.

(3) The semiring of complex vector bundles on $G(d, \mathbb{C}^{\infty})$. Again, direct sum is addition, tensor product is multiplication, and we get the ring by adding formal negatives. This is called K-theory.

(4) The cohomology ring of $G(d, \mathbb{C}^{\infty})$. Here the sub-semiring is cohomology classes of analytic (or algebraic) subvarieties.

The four rings are basically isomorphic. (Basically? Remember what I said about sweeping some issues under the rug. But if I did everything right, then they would be isomorphic.) As additive semigroups, each of the semirings is $\mathbb{Z}_{\geq 0}^{\infty}$. Thus, they have canonical additive bases, and the isomorphisms must take one basis to another.

I wrote about the connection between (1) and (2) here. That post also talks about the relation to flag varieties.

The relation between (2) and (3): The relation is that $G(d,\infty)$ is $GL_d \backslash M^0(d \times \infty)$. Here $M^0(d \times \infty)$ is the space of $d \times \infty$ matrices of rank $d$. Vector bundles on $G(d,\infty)$ correspond to vector bundles on $M^0(d \times \infty)$ equipped with a $GL_d$ action. Now, $M^0(d \times \infty)$ turns out to be contractible. If you know enough topology, this gives you an equivalence between vector bundles on $M^0(d \times \infty)$ with a $GL_d$ action and vector bundles on a point with a $GL_d$ action. A "vector bundle on a point" is just a vector space, so this is a fancy way of talking about ring (2).

Connecting (4) to the other three is much harder. A sign of this is that the connections between the other three generalize to all reductive groups, but the connection to (4) never does. I should say that, for any space X, there is a relation between the ring of vector bundles on X and the cohomology ring of X, called the Chern character. But this is not the isomorphism I want; it does not play correctly with the semiring structure. There are a number of papers which try to give the simplest explanation of why (4) matches the others; my favorite is by Harry Tamvakis.

55. CHL - December 6, 2008
Dear Hosts and readers; Pretty sorry to bother you. I just posted a question (response 47) on AJ's article on stacks (2008/06/19): http://sbseminar.wordpress.com/2008/06/19/whats-a-stack/#comment-4336 Any illumination or reference guide is appreciated. Best thanks. — Sincerely, CHL, 2008.12.06

56.
Some Grad - December 12, 2008
Thank you David for the references. I will also mention a curiosity that I have had for some time now. It may sound stupid, but let me try and express it anyway (after reading AJ's post on GW invariants). What is a field theory (assuming it is more basic than QFT, CFT and TQFT), and why should mathematicians care about it at all? Now I am not asking for a formal definition; what I want to understand is the motivation behind defining these objects and their mathematical significance.

57. jkl - December 27, 2008
Could someone explain (in terms that an algebraic geometer could understand) how the theory of quivers is used to classify indecomposable torsion-free modules over the completed local ring of a simple curve singularity (as is done in Green and Reiner's paper _Integral Representations and Diagrams_, Jacobinski's paper _Sur les ordres commutatifs avec un nombre fini de reseaux indecomposables_, or Greuel and Knoerrer's paper _Einfache Kurvensingularitaeten und torsionsfreie Moduln_)? Thanks!

58. John Baez - December 31, 2008
Some Grad wrote: "What is a field theory (assuming it is more basic than QFT, CFT and TQFT)…" Once upon a time, a "field" was a function from a manifold called "spacetime" to some vector space. The classic example is the electromagnetic field. A "field theory" is a bunch of equations satisfied by some field. The classic example is Maxwell's equations, which the electromagnetic field satisfies. Subsequently, this simple idea of field theory has been generalized and extended in many directions. The most important and shocking generalization is "quantum field theory". The old idea of field theory described above is now called "classical field theory". Quantum field theory is similar, but mind-bendingly different. "…and why should mathematicians care about it at all?" First of all, everything in the universe is made of fields. So, you are made of fields. You should want to understand yourself. So, you should want to learn field theory. Second of all, even if you don't want to understand yourself or the universe, the attempt to understand field theory has led to astounding developments in mathematics ever since field theory was invented sometime in the 1700s. Differential equations, Hilbert spaces, Riemannian geometry, connections on vector bundles, operator algebras… none of these subjects would have developed to their current glory without field theory. Trying to understand these subjects without knowing some field theory is like trying to learn musical scales without ever listening to music. So, you need to learn field theory.

59. another math grad student - January 1, 2009
Dear Professor Baez, What a beautiful answer that was. I am a math grad student looking to learn some field theory. What math books would you recommend? Please also let me know the math background required. I know no physics beyond high school AP physics. Thank you for the beautiful answer. The last line about music really touched me.

60. Jaime Montuerto - January 4, 2009
Hi, Can anyone give an idea on this equation, which relates to Fermat's Last Theorem's diophantine equation a^n + b^n = c^n, where a, b, c, n are all positive integers and n > 2? The equation being gcd(c^n – 1, a^n + b^n – 1) = 1. Can anyone factor it, or show a counterexample that is a positive odd integer greater than 1? Thanks

61.
Scott Carnahan - January 5, 2009
Jaime, The "requests" page is intended for suggested blog topics, although we are open to the possibility of some peripheral discussion. If you have specific questions in number theory, the NMBRTHRY list (find it on Google) would be a better home. However, it would be best if you took some time to make your question more precise. For example, in the above question, you did not specify which number you wanted to be odd, or what form a counterexample should take.

62. nickwallacesmith - January 6, 2009
Hi Scott, Thanks for the suggestion; I'll look into it. Regarding the question above, I mean that the gcd equation implies FLT; more specifically, if a positive odd factor greater than 1 is common to both functions, it seems this will imply that FLT is not true. The form of a counterexample is simply a positive odd integer. This second equation is based on Fermat's Little Theorem. I am wondering whether these functions are known; the first function, c^n – 1, is similar to cyclotomic binomials and I've seen it before, but not the a^n + b^n – 1 function. Have you seen the second expression before? It seems that the gcd is always 1; the proof is beyond me, although I am working on a 'grammar' model (combinatorics aspect) and it is too early for me to say. Again thanks.

63. Scott Carnahan - January 6, 2009
I'm still not sure if I understand you correctly, but the following is made out of all odd integers: gcd(7^3-1, 5^3+5^3-1) = 3. Other examples are quite easy to compute, and they don't say anything about the validity of Fermat's Last Theorem or Fermat's Little Theorem (both of which are true). The equation a^n + b^n – 1 = 0 has rational solutions that give solutions to the Fermat equation (i.e., the only ones are trivial), but plugging integers into the variables and taking the gcd with integers of the form c^n-1 seems to be a rather meaningless exercise. A general principle is that if you want to solve an "interesting" diophantine equation, then elementary tricks like taking gcds or using congruences will not work, because of the possibility of local-global obstructions. For example, Selmer showed that the equation 3a^3 + 4b^3 + 5c^3 = 0 has solutions mod N for all positive integers N, but no integer solution.

64. John Mangual - January 19, 2009
Could someone talk about Schubert Calculus? It sounds rather musical, but really it has to do with Grassmannians. Also, why might someone want to integrate with respect to Euler characteristic?

65. Richard - January 25, 2009
Can any of the geometric Langlands aficionados give a glimpse of how some of the geometric intuition is used in the original number theory setting? Most of these questions came from reading Frenkel's review hep-th/0512172. Grothendieck's fonctions-faisceaux [function-sheaf] dictionary: Given a complex of Weil sheaves on a variety over a finite field we can get a function by taking the trace of the Frobenius element applied to the complex. How do we go the other way? Are all of these sheaves sums of skyscraper sheaves and their translates? How is Deligne's l-adic Fourier transform related to the Fourier-Mukai transform? Are there any easier applications of Deligne's FT than the Weil conjecture for curves? References that should help, but are impenetrable to me: [1] Kiehl and Weissauer's book on etale cohomology [2] Katz: Travaux de Laumon [3] Laumon: Transformation de Fourier, constantes d'equations fonctionnelles et conjecture de Weil. Thanks!

66.
Ben Webster - January 26, 2009 Richard- I’ll try to write a more serious answer to these questions soon, since it’s something I was already thinking about. Just to quickly answer your questions: Given a complex of Weil sheaves on a variety over a finite field, we can get a function by taking the trace of the Frobenius element applied to the complex. This is true, though you might want to think about it slightly differently; if you have a variety defined over $\mathbb{F}_q$, and a Weil sheaf on it, you want to think about not just the $\mathbb{F}_q$-points, but the $\mathbb{F}_{q^n}$-points for all $n$. These are the fixed points of the $n$th power of the Frobenius on the $\bar{\mathbb{F}}_q$-points of the variety, so you can take the trace of Frobenius on the stalks at those points to get a function on these for all $n$. You want to think about all of these simultaneously. This is a bit like how knowing the traces of powers of a matrix tells you all its eigenvalues, whereas just knowing the trace of the matrix itself tells you a lot less. How do we go the other way? Are all of these sheaves sums of sky-scraper sheaves and their translates? So, I think from the above, you can see that this is not going to get you everything you want. You can arrange skyscraper sheaves to work for a few values of $n$, but you can’t get all of them simultaneously. I’m not sure exactly how to describe the image, but it’s a bit trickier than what you said. How is Deligne’s l-adic Fourier transform related to the Fourier-Mukai transform? I’m not sure there’s any connection other than analogy. They even generalize different versions of the Fourier transform (Mukai is the $S^1$ Fourier transform, and Deligne is the $\mathbb{R}$ Fourier transform). Are there any easier applications of Deligne’s FT than the Weil conjecture for curves? I don’t know about easier… Deligne’s theory of weights is a lot cleaner on projective varieties with no odd cohomology, but on the other hand that opens a whole higher dimensional/intersection cohomology can of worms that may not interest you. By the way, you may want to look at the “function-sheaf correspondence” section in my recent paper with G. Williamson. It’s concise, but at least it’s shorter than Kiehl and Weissauer. 67. lewallen - January 26, 2009 I was wondering if anyone would give an exposition of canonical bases in quantum universal enveloping algebras, plus some/any applications? My personal interests would enjoy applications to integrality/calculability of knot invariants etc., but anything would be great. Thanks a lot for the great blog! 68. Sami - January 29, 2009 Does anyone happen to know how to write down a point in the affine Grassmannian? More precisely, I’m looking for an analog of the canonical form for $k \times n$ matrices in the Grassmannian setting, but I am at a complete loss as to how to do this in the affine case. A post about how to think of points, lines, etc. in the affine case would be fantastic. Thanks! 69. Eric Zaslow - January 31, 2009 As a sporadic blogger (under a mild pseudonym), I know how nice it is to receive a comment — even one (such as this) with no added content. So, I will relate how impressed I was reading this blog for the first time. Of course I don’t know much about the subject(s), but the blog is exactly what a bunch of really smart math friends can do with organizational and pedagogical talent. Hmm… trying to say something math-y now…. Nope, got nothing. Just a request — opers, what and why in under 100 words. Thanks! 70.
John Mangual - February 3, 2009 why do people care about sheaves? and how are they perverse? also what is a scheme. 71. davidspeyer - February 4, 2009 mohammed akbari sehat writes: I think that Selig’s notes do a good job walking you through the basic definitions quickly. If you want to get some motivation for the definition, you might like Scott’s old post Group = Hopf Algebra. Do you know the motivation for the definition of a Hopf algebra, by thinking about functions on abelian groups? If $V$ is the vector space of $k$-valued functions on a finite abelian group $G$, then we have the following operations: inclusion of the function which is $1$ on every element of $G$, giving a map $i: k \to V$; pointwise multiplication of functions, giving a map $m: V \otimes V \to V$; evaluation at the identity, giving a map $\epsilon: V \to k$; the map $\Delta(v)(g,h) = v(gh)$, giving a map $\Delta: V \to V \otimes V$; and the map $S(v)(g)=v(g^{-1})$, giving a map $S: V \to V$. If you think through what properties these maps obey, you’ll see that they satisfy the axioms of a Hopf algebra, plus the condition that $m$ and $\Delta$ are commutative. Moreover, these axioms are self dual, in that, if you take the transposes of all these maps, you’ll get another such class of maps, with $i$ and $\epsilon$ changing places, as do $m$ and $\Delta$. Canonically, this operation replaces $G$ by its character group. If we no longer insist that $G$ is commutative, then everything stays the same, except that $\Delta$ is no longer commutative. Symmetry suggests that we should also drop the assumption that $m$ is commutative; this gives us the axioms of a Hopf algebra! Of course, maybe you already know all of this. (Your question wasn’t very specific.) I don’t know where you should go to read about the structure theory of Hopf algebras, but Noah, Scott M. and our frequent commenter Greg Kuperberg probably all have ideas. 72. davidspeyer - February 4, 2009 Just wanted to make sure that John Mangual, and everyone else who was interested, saw Charles’ great series of blog posts on Gromov-Witten theory. He provides a lot of the technical algebraic details to go with AJ’s great big-picture answer. 73. John Mangual - February 5, 2009 This is great. Now I can say it in one sentence. A Gromov-Witten invariant is an integral of a product of cohomology classes over the moduli space of marked curves. Actually I have no idea what I just said. Some implications: cohomology classes can be multiplied, cohomology classes can be integrated, the moduli space of [stable] curves can be parameterized. The moduli space is a space of curves up to isomorphism as compact Riemann surfaces. So $M(g,n)$ is some $(3g - 3 + n)$-dimensional manifold. We can also talk about the moduli space of stable maps from one of these curves to a ‘nice’ (I have no idea what ‘nice’ means here) complex variety. Thus, a cohomology class is just a function which takes a stable curve to an integer. I’m not sure how this is integrated. 74. Charles Siegel - February 5, 2009 John, you need to go ONE step further… it’s an integral over the moduli space of stable maps from marked curves. Also, $M_{g,n}$ is only an orbifold: it can have singular points, but they aren’t THAT bad. As for the niceness conditions… we’re loosening them rapidly.
The cohomology classes form a sort of ring (not quite commutative, but we only care about the commutative part, in general) and then integration is a function that takes classes to numbers (strictly, rational numbers, but they’re integers in the cases I talked about). What the cohomology classes really are is things representing collections of curves, and integration takes these collections to integers, and generally it’s just counting isolated points (it kills any positive dimensional families of curves). Let me know if there’s more I can do to clear it up. As for some of your other questions, here are the answers that I can give: Sheaves are wonderful bookkeeping devices. The first use that most people see for them is keeping track of locally defined functions and when they can be patched together to functions on larger sets. Then they turn out to have all sorts of amazing properties. Basically any time you have local information and want to know when you can patch it together, sheaves pop up, including vector bundles and, more generally, $G$-bundles. The formalism is very flexible and can represent a lot of different things, which is why people care. As for perverse sheaves, I know nothing. For what a scheme is, I wrote about that here; hopefully you’ll find that helpful. Now, earlier, you asked about Schubert Calculus… I’m planning some posts on that, once I dig my way out of the pile of Gromov-Witten, Donaldson-Thomas, and Stable Pair papers I’m piled under, but though it’s very ad hoc and doesn’t give much general detail, I did write this, which involves some Schubert Calculus. 75. John Mangual - February 6, 2009 Yeah, I guess I’ve asked a lot of questions over time. For now I guess I don’t really have anything to ask about sheaves, so I’ll focus on the enumerative stuff. It seems like Schubert Calculus and Gromov-Witten theory both count things. Schubert Calculus seems to revolve around the cohomology of the Grassmannian G(n,k) – and somehow Chern classes come into play here too. So there seem to be two problems: understanding the cohomology ring itself (which *must* be well understood by now) and then learning how to translate enumerative problems into this language. 1) What is the structure of the cohomology ring of the Grassmannian G(n,k)? 2) How can I use this structure to answer enumerative problems about hyperplanes? (In other words, how do I encode real geometric problems in terms of the cohomology ring?) I ask these questions because these are probably well understood objects where good explanations may appear in a book somewhere. On the other hand, I like talking to real live people too. You can’t ask a book to clarify. For Gromov-Witten theory I have basically the same questions. I imagine that it’s probably a feat if you use this theory successfully to compute things. At the end of all this investigation, I just want to have a few realistic examples of Gromov-Witten calculations that I can hang up in the living room next to the television or by the window. Or maybe it’s more like a household appliance… 76. Charles Siegel - February 6, 2009 The Schubert Calc stuff will be covered in a post on my blog in the near future. As for GW theory, well, it counts curves, though it’s a very active subject right now, so we’re still pushing what exactly it can count and under what circumstances. As for actually counting things with it, it’s not as bad as it sounds.
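To give a flavor of such a computation: Kontsevich’s recursion determines the number $N_d$ of degree $d$ rational plane curves through $3d-1$ general points from the single seed value $N_1 = 1$. Here is a minimal sketch in Python (just a transcription of the standard recursion, for illustration):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    """Number of rational plane curves of degree d through 3d-1 general points."""
    if d == 1:
        return 1  # exactly one line passes through two general points
    total = 0
    for d1 in range(1, d):
        d2 = d - d1
        total += N(d1) * N(d2) * (
            d1**2 * d2**2 * comb(3*d - 4, 3*d1 - 2)
            - d1**3 * d2 * comb(3*d - 4, 3*d1 - 1)
        )
    return total

print([N(d) for d in range(1, 6)])  # [1, 1, 12, 620, 87304]
```

The recursion comes out of the WDVV equations for the GW invariants of $\mathbb{P}^2$; the point is that once the formalism is in place, a count that had resisted classical methods reduces to a few lines.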
For instance, there was my post on Kontsevich’s Formula, but also there’s a relation to my most recent post, the one on the Clemens Conjecture. The number of rational curves in a quintic threefold is predicted by the GW invariants, which can be calculated via the conjectured Mirror Symmetry. Sadly, we’ve not yet proved that these are the right numbers, but my understanding is that they agree as far as we’ve managed to show that the number is finite. 77. Jason - February 8, 2009 I’d like a new El Naschie post summarizing the latest developments. 78. Noah Snyder - February 9, 2009 Anyone know of any expository sources on Deligne’s paper “La Categorie des Representations du Groupe Symetrique S_t, lorsque t n’est pas un Entier Naturel”? The combination of lack of pictures, lots of technical category theory, and being in French is a bit intimidating (any two of those three I’d be fine with). 79. David Speyer - February 9, 2009 The toric variety post (request 48) is now up. 80. David Speyer - February 18, 2009 There is now a sequel to the toric variety post. At the moment, no further posts on this subject are planned. Peter Arndt - February 23, 2009 Hi, Here are two papers by F. Knop extending Deligne’s construction (I saw him give a talk about this in Göttingen): http://arxiv.org/abs/math.CT/0605126 http://arxiv.org/abs/math.CT/0610552 They match your criterion by being in English (I would have preferred the pictures…); the first one is purely technical but only 6 pages long, the second is more reader-friendly. Charles Siegel - February 24, 2009 John, if you’re still interested, I’ve just made the first post in my series heading towards Schubert calculus. It’s here. 81. davidspeyer - February 25, 2009 Noah — I’ve been reading the Knop references Peter Arndt cites in 78, and I have a pretty good idea what Deligne is doing. I’m working on a post now. 82. Noah Snyder - February 25, 2009 Excellent! I looked at those papers for a little while too, and they look very interesting. In particular, when I’d looked at Deligne’s paper I had no idea that his construction had a chance of applying to more than just the case of S_n. David, I look forward to reading what you have to say about them. Thanks Peter for the links! I may later on have a follow-up about another way to think about S_t with more pictures. 83. Greg Kuperberg - February 27, 2009 A request for a posting about math jobs: It bugs me that so many of the positions on the math jobs wiki are simply listed as “filled”. Of course it’s a touchy subject if someone is interviewing or has gotten an offer or whatever. But once someone has accepted a position, I don’t see that there is any controversy left at all, either for the candidate or for the department. Could people post their own names when they get a job; or could departments post who they have hired? One of my main motivations in setting up the math jobs wiki is to let people know, if they didn’t get hired somewhere, who got hired instead. I think that people have the right to know why they didn’t succeed. 84. john mangual - March 3, 2009 So I have a torus and some metric on it. By uniformization it is conformally equivalent to a flat torus with a certain aspect ratio. Is there any way to compute it from the metric alone? More generally, given a surface and a complex structure on it, how can I compute the moduli space representative (or even just the Teichmüller coordinates) from the metric? 85.
an_occasional_mathematician - March 10, 2009 This blog already contains quite a bit of useful career advice, and it would be great to see here a discussion of how one gets to give *invited* lectures at conferences. I hope one could learn more than the standard replies like “do outstanding work, and the invitations will follow…” Many thanks in advance! 86. Jason - April 12, 2009 This is a request to Ben Webster on the L’Affaire El Naschie post. You are not keeping it up-to-date; one of the first links is broken, because Baez has pulled his stuff down. Would you please send people over to El Naschie Watch with a note PREPENDED to the article, rather than just the existing note at the end of the comments? This will save everyone a lot of frustration, since you are not keeping broken links up to date and don’t want to be the go-to guys for El Naschie information anyway. I keep an archive of the stuff Baez pulled down, and much much more. Thanks! –Jason 87. Bradley Froehle - April 21, 2009 Just a quick heads-up that the Tricki (http://www.tricki.org/) is now online. 88. DD - April 29, 2009 A typographical question: what is the name of the font used in old IHES publications, e.g., EGA? Is there a tex style file that reproduces that style? 89. John Armstrong - April 29, 2009 DD: it’s been a while since I’ve looked at EGA, but if I remember correctly the font it used was called a typewriter. More seriously, if I’m correct I think the most common typewriter typeface was very closely related to Courier, if it wasn’t Courier itself. 90. DD - April 30, 2009 As regards Post #91: Dear John, I think you’re confusing EGA with SGA; here is an example of the typesetting in question http://archive.numdam.org/article/PMIHES_1960__4__5_0.djvu 91. John Armstrong - April 30, 2009 Sorry, I don’t have a viewer capable of opening DjVu files. Maybe you have a nice, normal PDF around? 92. jc - April 30, 2009 http://archive.numdam.org/article/PMIHES_1960__4__5_0.pdf 93. Ben Webster - April 30, 2009 Sorry, I don’t have a viewer capable of opening DjVu files. No time like the present to make a change (here). Scans of books tend to be gigundous in PDF format. DjVus are much smaller. 94. David Speyer - April 30, 2009 DD – I reposted your question at a blog where a lot of typesetters hang out. The best guess is Baskerville. 95. Andy P. - April 30, 2009 DD and David — It is indeed a version of Baskerville. New IHES publications are typeset with TeX using SMF Baskerville, which was developed to copy the typesetting of classic IHES publications. See http://omega.enstb.org/yannis/pdf/article-gut99.pdf for a description of it (in French). I have always thought that the IHES typesetting is by far the most elegant of any mathematical publisher. One can buy the font they use (for a LOT of money!), and several times I’ve considered buying it for my own use… 96. J Nernoff III - May 27, 2009 I’m not a geometer but have an interest in art. I have long wondered how I can make animated art pieces (computer-viewable) using flashing lines of dots, like the old Bijou movie marquees with bulbs going off and on to create moving dots and lines. One idea would be to start with a picture (e.g. the Mona Lisa) and have it break up into lines and rotating lines of dots and go random for a while, then reassemble every now and then to reform the picture. Does anyone have any hints as to how to do this (and many other constructions)? Thanks 97. dunfield - May 27, 2009 This is a test, ignore. 98.
David Speyer - May 27, 2009 Hey Nathan, Your recent comments got stuck in our spam box. I unlocked this one, and one of your three comments on Ben’s post. Sorry! 99. Steven Sam - June 26, 2009 I’d like to know what ind-varieties (schemes) are and how one thinks about them / works with them, and also why they are necessary, i.e., why are things like affine Grassmannians not varieties / schemes? 100. Peter McNamara - June 27, 2009 In any category, an ind-object is simply a (filtered) diagram of objects. You would like to take a colimit of this diagram, but such a colimit may not exist in the category. So, morally speaking, just pretend the colimit exists and call it an ind-object. The simplest example of an ind-variety that I know is the colimit of the sequence of inclusions $\mathbb{A}^1\rightarrow\mathbb{A}^2\rightarrow\cdots$. The affine Grassmannian is of approximately the same form, as a sequence of inclusions of subvarieties of increasing dimension. It is easy to see that the resulting object is too big to be noetherian; to see that it can’t be any scheme at all, one reduces as always to a corresponding question in ring theory. I believe that for such an ind-variety to be a scheme, one would have to have a ring $R$ together with a saturated chain of prime ideals $I_0\supset I_1\supset\cdots$ with the property that all maximal ideals $M$ contain some $I_n$. I claim that such a setup is impossible (Exercise). If one wants to think about ind-varieties, note that you can always make sense of the functor of points of an ind-variety, so you can think about it in this way as a space described by its functor of points (and of course, by Yoneda’s lemma, the functor of points tells us everything). In practice, every time I’ve been working with the affine Grassmannian, all the action has been taking place on a bona fide finite-dimensional subvariety, so I’ve been able to get away with only knowing some theory of algebraic varieties. I’m not sure how typical this experience is. 101. Steven Sam - July 1, 2009 Peter: with the affine space example, wouldn’t the limit be Spec of a polynomial ring in countably many variables? 102. Peter McNamara - July 1, 2009 Spec of a polynomial ring in countably many variables can be identified with infinite tuples of elements of our field. The example I am trying to mention is a vector space of countably infinite dimension, which is strictly smaller (i.e. only finitely many coordinates are nonzero). Every point in this space $\mathbb{A}^\infty$ is required to lie in some $\mathbb{A}^n$. 103. grad student - July 2, 2009 Is it too soon to make some predictions about the 2010 Fields medals? Or at least talk about who you think deserves one? Last time around I feel like everyone was expecting Tao and Perelman to get medals, whereas this time maybe there aren’t analogous clear frontrunners. Maybe I’m wrong? 104. Zygmund - July 5, 2009 I’d be very interested in learning the proof of the Mitchell embedding theorem, and more about category theory in general (especially tensor categories). 105. Jim Humphreys - July 26, 2009 Younger mathematicians than I sometimes lament the solving of so many big problems in the 20th century, but there are significant problems left over at the interface of representation theory and algebraic geometry. One which has been somewhat neglected since the important work of Henning Haahr Andersen (following up his 1977 MIT thesis) concerns higher cohomology of line bundles on flag varieties in prime characteristic.
Though Kempf’s vanishing theorem (which plays a major role in Jantzen’s book “Representations of Algebraic Groups”) shows that the lowest and highest cohomology behave as in the characteristic 0 Borel-Weil theorem, the intermediate degrees require more subtlety. I’m still convinced, as in my 1986 paper in Adv. in Math. and my more detailed follow-up in a 1987 JPAA paper, that the answer requires Kazhdan-Lusztig theory for the affine Weyl group (of dual Langlands type). I worked out a lot of unpublished details for the rank 2 case of G2, but in general one needs higher-powered methods. This problem is not superficially close to the study of simple modules (Lusztig conjectures for groups and Lie algebras), but seems intrinsically challenging and in need of fresh thinking. In a modified form the parallel problem remains unsolved for quantum groups at a root of unity. 106. Chris Brav - July 27, 2009 Full flags are indeed probably intrinsically challenging. But does someone know at least the cohomology of obvious vector bundles on Grassmannians? Here, obvious means bundles built from tautological bundles using multilinear operations. For projective space, you can work out the cohomology of such bundles in arbitrary characteristic by just starting with the Euler sequence and then playing with long exact sequences. I’m curious whether anyone has had the patience to do the same for Grassmannians, or whether it is known that the answer is not characteristic-free. 107. David Speyer - July 27, 2009 Chris: I haven’t checked this carefully, but I think all the “obvious” vector bundles you mention are pushforwards from line bundles on the flag variety (or direct sums of such pushforwards)? So cohomology of these vector bundles is related to the cohomology of the line bundles Jim Humphreys asks about by a spectral sequence. My guess (without much justification) is that working with full flags is easier than working with Grassmannians here, and that transforming between the two problems is easier than solving either one. 108. David Speyer - July 27, 2009 I think that Griffith’s example of characteristic dependence http://www.ams.org/mathscinet-getitem?mr=573481 pushes down to an example on $\mathbb{P}^2$. Specifically, Griffith’s first example (his Theorem 3.1) is that the line bundle $\mathcal{O}(-2p, p)$ has nonvanishing $H^1$ on $F\ell_3$ in characteristic $p$, but not in characteristic zero. I didn’t dot every i and cross every t, but I thought a bit about what happens when you push $\mathcal{O}(-2p, p)$ down to $\mathbb{P}^2$. It looks to me like you get $\mathcal{O}(-2p) \otimes \mathrm{Sym}^{p} V$, where $V$ is the tautological $2$-bundle. It looks to me like Griffith’s resolution (1) pushes down to a resolution of this vector bundle, and the long exact sequence in cohomology behaves exactly the same way as in Griffith’s paper. Sadly, Griffith’s paper doesn’t seem to be online and I’m too lazy to retype it. But you can take this as a challenge: compute the pushdown of $\mathcal{O}(-2p, p)$ to $\mathbb{P}^2$, and its cohomology. UPDATE (December 9) I just looked back at this answer and found it had a very confusing typo in it. In the first paragraph, last sentence, $F\ell_3$ used to read $\mathbb{P}^2$. Sorry about any confusion that caused! 109. Jim Humphreys - July 27, 2009 David and Chris, I don’t want to overload the detail, since the history is complicated.
But Larry Griffith (now teaching computer science at nearby Westfield State College) did a Harvard thesis with Mumford which got other people thinking, including Seshadri and then Andersen, etc. Griffith’s results/methods were awkward in retrospect, while Andersen made more creative use of Demazure’s “simple” proof of Bott’s theorem. Here the varieties $G/P$ such as Grassmannians play an essential role in the exact sequences, but in characteristic p everything gets complicated after the projective line case. The vanishing behavior of cohomology is still not settled beyond the misleadingly nice top and bottom degrees treated by Kempf. And the representations are usually far from irreducible, which is what drew me to the subject. The papers by H.H. Andersen are the best resource, and many are available online. (I have copies or reprints of everything, including Griffith’s thesis.) Footnote: Henning’s family name is Haahr Andersen, under H in the Aarhus phonebook, but the pronunciation in the U.S. is awkward. His wife’s family name was Andersen, which caused difficulty with the driver’s license people in NJ when they spent a year at IAS. 110. Ben Webster - July 28, 2009 but the pronunciation in the U.S. is awkward. Any of you curious about what this means can email me for an explanation. This is a family blog, so I won’t say anything more. 111. Chris Brav - August 1, 2009 I had known that in characteristic zero, tensor products of Schur functors of tautological sub- and quotient bundles on Grassmannians are pushed forward from line bundles on full flags, but did not realize that this was the case in all characteristics, since I had not known Kempf vanishing (higher cohomology vanishes for line bundles labeled by a dominant weight). However, these particular line bundles on full flags are of a very special sort, where the weight splits into two parts, one of length k, dominant for GL_k, and the other of length n-k, dominant for GL_{n-k}. Is it the case that cohomology is not known even for such line bundles? 112. Jim Humphreys - August 2, 2009 Chris, I’m not sure offhand about what is known in this special case, but I should have emphasized that there is a certain amount of information about low ranks and some other cases in the literature (just very little general theory). Probably the best-informed people are Andersen and Jantzen at Aarhus, and Donkin at York. Also, I should emphasize that standard computational methods require some study of the cohomology of more general vector bundles. But the vanishing behavior and $G$-module structure in the case of line bundles remain largely unknown, apart from the antidominant/dominant line bundles where Kempf plus Serre duality give the classical vanishing behavior and where the modules are Weyl/dual Weyl modules still being actively studied (for large enough $p$, Lusztig’s old conjecture on composition factor multiplicities via KL polynomials for the dual affine Weyl group). In other situations the early work of Andersen shows, mainly in low ranks, how nonvanishing can occur in more than one degree and how the module can even be decomposable. Generically the modules are twisted versions of Weyl modules, but otherwise they get messy to study. Probably the vanishing can’t be well understood apart from module structure and KL theory, except in isolated cases. 113. Anon - September 2, 2009 Hey, I was wondering if somebody would be interested in writing up a discussion of the transition from “classical” algebraic geometry to “modern/Grothendieck” algebraic geometry.
To be a bit more specific, I’m spending the summer working through the fundamentals of scheme theory, but to be quite honest, I’m having a lot of trouble understanding the revolutionary nature of the French school of algebraic geometry. I guess I’d be interested in seeing a post that gave a concrete example of a particular problem, as interpreted in the classical sense, and how it was reinterpreted in the modern sense. Best, Anon 114. Alexander Woo - September 2, 2009 A late followup to Jim’s post on cohomology of line bundles on flag varieties in positive characteristic: A closely related problem is that of finding minimal free resolutions (or just Betti numbers) for determinantal ideals in positive characteristic. Jerzy Weyman explains the connection in his book; it was originally used by Lascoux in the 70s to find the resolutions in characteristic 0 (modulo a minor gap long since filled). As far as I know, the most recent reference on positive characteristic resolutions is still the semi-posthumous paper of Buchsbaum and Rota about all their various not particularly successful attempts. This version of the problem (part of it – if I remember correctly, not all the line bundles are needed) is a little easier to approach, in that one can quite easily get an answer from your favourite commutative algebra package in small but nontrivial examples, though it seems highly unlikely that one can find much of an answer by staring at the computable examples. 115. emerton - September 2, 2009 In reply to Anon (comment 113), Until such time as the post you requested appears, you might like to look at Mumford’s wonderful book “Lectures on curves on an algebraic surface”, or at least the introduction. You could also read Mumford’s article “Picard groups of moduli problems”, or again, at least the introduction. Without leaving the confines of Hartshorne’s book, you could read his discussion of the Hilbert scheme, or of the Jacobian of a curve, or his proof (which is Grothendieck’s proof) of the theorem on formal functions, and its corollaries: Stein factorization and the Zariski connectedness theorem. The first two illustrate Grothendieck’s conceptualization of moduli problems in functorial terms, and the latter illustrates the way Grothendieck uses the interplay of schemes, formal schemes, and cohomology to make arguments. Although the latter predates Grothendieck, the theorem on formal functions and the connectedness theorem were among the most difficult results of pre-Grothendieckian algebraic geometry, and the way they get reformulated and reproved from his point of view is worth understanding. One more suggestion: in Mumford’s red book, he discusses Zariski’s main theorem, and how Grothendieck ultimately reinterpreted and generalized it; this is another good illustration of the power and focus of Grothendieck’s viewpoint. 116. Jim Humphreys - September 3, 2009 Not strictly a request, but some of you may take an interest (morbid or otherwise) in the ICM 2010 speakers lists just published. The selection process emphasizes “international”, since it would be easy to fill up the program with people currently based in the US, but in any case it’s an interesting cross-section of people who do know something about their subject. (In case you are planning to attend the congress in Hyderabad, take the visa requirements seriously. I was once supposed to go to a conference there and followed all the rules, but my passport with visa came back too late.
Armand Borel once tried to expedite a visa by taking his passport to the NYC consulate, where it was tossed onto a huge pile of other passports awaiting action.) Caveat emptor. http://www.icm2010.org.in/speakers.php 117. Why the ICM is not as good an idea as it might sound « Secret Blogging Seminar - September 3, 2009 [...] Jim Humphreys’s comment reminded me of one of my rants that has yet to be bloggified. The non-hyperbolic title is above; [...] 118. Sue Sierra - September 8, 2009 I just arrived to begin a postdoc at Princeton, and have been discussing with some of the other junior faculty the possibility of an informal trans-disciplinary seminar among postdocs in the department. I’d love to see what the writers and readers of this blog think are the important ingredients of such a seminar. Since the goal would be to have it be fairly broad, it couldn’t be a research seminar as such. I want to have it mainly as a way to build community and to foster new mathematical and personal interactions. But I also see other benefits, such as more practice in talking about your work to a general audience before having to do it on the job market. I’ve been told that a few years ago MIT had something called the “undistinguished seminar” which the postdocs there at the time found to be a lot of fun. What made it work? Are there other models out there? We have already decided that a bar outing after the seminar meets is an important ingredient. Other ideas? Thanks! Sue 119. Successful Researcher - October 5, 2009 This is not exactly a request… I just learned that Israel Gelfand is no longer with us. R.I.P. 120. Bruce C. Meyer - November 11, 2009 In trying to construct a set of all the real numbers from 0 to 1, I was able to create an infinite set which appears to have a recursive, fractal-like nature, so that between any two specified numbers in the set, I could specify an infinite set of further numbers that are, recursively, mirror images of the previously specified ones. So far, this is all expected, yes yes of course. This set is what Cantor’s diagonal predicts, I understand. A problem I wonder about is that this infinite set ought to be arrangeable into a sequence, and from that sequence a Cantor’s diagonal ought to be constructed, and from that diagonal, we ought to find a number that is not already specified. But I already showed that between any two numbers, there is a specified infinite set of all the smaller numbers that lodge between those. That is, the diagonal so constructed is already acknowledged or recognized, theoretically. The way out of this difficulty would be to say that the exhaustive list could not be arranged in order of the counting numbers (as Cantor assumed?); or that a diagonal could not be constructed (as assumed?); or both. I have written my speculations in more rigorous form than what I just told you. Would anyone like to help me sort this out? Either I’m making a simple error or I’ve overthrown the nondenumerability principle, which seems unlikely. 121. Scott Carnahan - November 11, 2009 Bruce, your error seems to be in assuming that the ordering of numbers in the list has anything to do with the ordering the set inherits from the real numbers. Cantor’s diagonal argument doesn’t say anything about a new number being between a fixed pair of numbers. It simply says that if you have ANY countable set of real numbers, there will be a real number that is not in the set. 122. Bruce C.
Meyer - November 11, 2009 By ‘countable’, do you mean finite? The key to my formulation was that for any finite power expansion of my basic set (details available), there would be an easily discoverable real number not in it; but if the set were raised to one higher power, that new set would include the exception. This would be true for all finite power expansions of the basic set. So I ask, what if we make the power “infinity” rather than any finite n? The counter-proof number would have to be included, because infinity plus one is still just infinity. But it looks like what you are saying is that when the power is infinity, the set is not countable. Which is what I was speculating–that is, this is what I meant when I said that an exhaustive list could not be arranged in a countable fashion. So what I came up with is a procedure for generating a non-countable set of real numbers. In which case Cantor is not making a claim against it. The basic set is (.0, .1, .2, …, .9), with ten members to one decimal place. If we ask if there are any more real numbers that exist, we merely go to the hundredths place. So the set is expanded by a power of n+1. If n=1, there are ten members; if n=2, there are 100 members, and so forth. The exception would be at a power one greater than the existing finite power n. So I supposed that if n were infinity, there would not be an exception number, that all the nonrepeating and repeating numbers would be in this set, that the set would be fully dense, and no diagonal counterproof could be constructed. I was supposing, however, that the members of this set could be arranged to correspond to the counting numbers, and a diagonal constructed. It seems then that the set I specified (where the n-power is infinity) is not, in fact, countable. I’m a little hazy on what makes an infinite set countable or not countable. Did I understand you right? 123. Scott Carnahan - November 11, 2009 Bruce, I am having difficulty understanding what you are saying, in part because your use of terminology does not conform to the conventions of modern mathematics. As far as I can tell, you are constructing the set of all decimal representations of all real numbers between 0 and 1, and Cantor proved that this set is uncountable. If you are having trouble with the concept of a countable set, I think it would be best if you were to read about it, and familiarize yourself with standard techniques for proving countability and uncountability, before attempting to find contradictions in set theory. 124. Charles Siegel - November 11, 2009 My thought is that his problem lies in the same place as something that went on the arxiv a while back. The number of paths through the tree he’s constructing isn’t countable; the finite paths are, and those only give the reals with finite decimal expansion. There is a bijection of the reals with the paths through a tree branching ten times at each node, but the set of such paths isn’t countable. 125. Bruce C. Meyer - November 12, 2009 Thanks to Scott and Charles. If after this message I am still missing something about countable sets (or other points), I promise to go read up on it. But Charles seems to have gotten my point. The tree is infinite in extension, and the finite paths are all and only those paths that terminate at zero, i.e., out of every branching, only one of ten would give rise to a repeating zero (or any other numeral) in every subsequent branching.
In this way the infinite tree allows for every terminating, nonterminating, repeating and nonrepeating decimal. I understand *this set*, represented by the entire infinite tree, to be the entire real number set from 0 to 1 (inclusive, since 0.000… and 0.999… are two included branches). From what I can gather, the numbers of this tree are a set of higher cardinality than the counting numbers. In which case, no contradiction of set theory is indicated. Here’s the big but: As Charles touched on, my interest was in representing the non-finite decimal expansions that would lie between any arbitrarily chosen finite decimal expansions. It is this group of non-terminating and nonrepeating numbers that exists in the infinite branching tree, and that I want to be part of the construction of a Cantor’s Number. A sequence of decimals could be constructed by this tree, upon which a non-appearing diagonal ought to be constructed. The first ten items are 0.000…, .1000, .2000, …, .9000, followed by the next branching of 0.000…, 0.01, 0.02, through 0.99, followed by the next branching at the thousandths place. Those paths that terminate with an ever-repeating zero could be entered merely once, which is one out of every ten at every iteration. There seems to be no theoretical hindrance to stacking the ever-greater-sized sets into a list, upon which a diagonal could be constructed. At any arbitrary point, the stacked branches are countable, *but the entire infinite expansion of branches ought to be countable* as well. The number of branches to be sequenced is itself infinitely large, and a larger infinity than the counting numbers (as far as I can tell). But even so, it seems to me that this entire sequence of stacked branches could be set into a one-to-one correspondence with the counting numbers. From which a Cantor’s Diagonal could be constructed, which would not appear in the stack itself. Even though the stack itself is exhaustive of all decimals from 0.0 to 0.9…. 126. Charles Siegel - November 12, 2009 You’re missing the point. It’s not exhaustive. The nodes of the tree are in one-to-one correspondence with finite expansions. The things that are in correspondence with the reals are the PATHS from the root all the way up the tree. 127. Bruce C. Meyer - November 12, 2009 Do you mean that the set of all paths, an infinite set, is not exhaustive? You said each path represents a decimal expansion, and these include all the reals. I’m pointing out that there are infinitely many paths, the number of paths being a function of the number of nodes, which are themselves a function of the decimal places out to the right, which themselves correspond to the infinitely many counting numbers. These paths include all the nonterminating and nonrepeating decimals, plus the repeating, plus the finite (terminating) decimals. These paths may be stacked. From this stack, a Cantor Diagonal can be drawn. Isn’t that what I said? Each node denotes nine new decimal additions to the previous number and one additional zero, which indicates the termination of the previous decimal expansion. There are infinitely many nodes going out (covering the nonterminating numbers), and the nodes are themselves items of a set corresponding to the next decimal place out.
There are infinitely many sets of nodes going out, corresponding to the decimal places 1, 2, 3, …, each node having ten members (one member of the terminal branch and nine unique decimal additions to the previous decimal on the left side of the node), and the number of nodes going down (10 at the first, 100 at the second) is a function (ten to one) of the n-number of the nodes on the right. Between any two adjacent branches, at any given node, there are no decimal numbers that can fit between them, except for the decimal expansions to come on the right, of which there are infinitely many. The problem (for me) now becomes: can the infinitely many paths (= branches) be stacked? And if stacked, can a Cantor’s Diagonal be constructed? 128. Charles Siegel - November 12, 2009 No, the infinity of paths cannot be put into bijection with the naturals. That’s the point. The number of paths of length $n$ is $10^n$. So the paths of length $\mathbb{N}$ are of cardinality $10^{\mathbb{N}}$, the cardinality of the power set of the naturals, which, by diagonalization, cannot be counted. 129. Bruce C. Meyer - November 12, 2009 Thank you. Your explanation is coherent, and you’ve pointed me in the right direction, i.e., to study these concepts more properly. I did understand that the power set of the natural numbers is of a higher cardinality than the naturals themselves–but I didn’t think that the constructed number tree was it. My version of the Power Set wore glasses and came from Smallville–it didn’t look like the tidy construction I ran across in the textbooks. I stopped by a mathematician at my local college (MIT, lol) (I’m outside of Cambridge, MA), to run an earlier version of the theory by him, and he said that I proved, by a back door, what Cantor did directly. I didn’t understand the details but felt too embarrassed to take up more of the good professor’s time. 130. Charles Siegel - November 12, 2009 Well, it’s not quite the power set that’s showing up, just something with the same cardinality (ok, so I guess we can’t legitimately tell them apart in some senses, but anyway…). To read further, any book on set theory will have a section on cardinality, and I know that a few are available online, or from Dover (and thus, are cheap). 131. Bruce C. Meyer - November 18, 2009 One more smallish concern/question/problem. Cantor constructed a decimal list based on a grid with the numerators taken from the natural numbers moving to the right, and the denominators likewise taken from the natural numbers moving down. This gives us every possible ratio: 1/1, 1/2, 1/3, … 2/1, 2/2, 2/3, … From this grid, we eliminate all the duplications, and, moving diagonally, we can construct an exhaustive list of decimal numbers. From this exhaustive list, we then construct the Cantor’s Diagonal, which proves that there are more numbers even yet. Why isn’t Cantor’s grid the same cardinality as my tree, and as such, the same cardinality as the Power Set? I surmise that if we restrict the construction of the Cantor grid to ratios that have a numerator less than or equal to the denominator, we should have a result identical to my tree, except written in a different order. There is no Cantor diagonal available for my tree, it being of greater cardinality than the natural numbers. Yet the tree should exactly duplicate the content of the grid I describe here with the restriction on numerators (preceding paragraph), which DOES provide a Cantor diagonal. Any ideas?
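For reference, the counting behind Charles’s earlier reply (comment 128) can be put on one line. The nodes of the tree form a countable union of finite levels of sizes $10, 10^2, 10^3, \dots$, hence are countable; the paths do not:

$\#\{\text{nodes}\} = \aleph_0$, whereas $\#\{\text{paths}\} = 10^{\aleph_0} = 2^{\aleph_0} > \aleph_0$,

the last inequality being exactly Cantor’s diagonal argument. The grid of fractions just described lists only rational numbers, so it sits on the countable side of this divide.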
132. Charles Siegel - November 18, 2009 No, that list gives precisely the rational numbers. It doesn’t include $\pi$. And it is the same cardinality as the NODES of your tree. You’re ignoring the fact that the real numbers aren’t points in your tree, but PATHS IN YOUR TREE. 133. Bruce C. Meyer - November 18, 2009 No, I’m insisting that the paths are the numbers. The points in the tree would be place markers for finitely terminated numbers. The understanding that I’m working with–hidden assumption?–is that a decimal is a fancy way of writing numbers, be they rational or irrational. An irrational such as pi is 3 + (1/10) + (4/100) + …, and though it may not terminate, it is the relationship of so many digits in the numerator and so many places in the denominator. That is all that it means to WRITE something as a decimal expansion. And we write irrational numbers as decimal expansions with exactly that understanding. The paths in the tree include all the irrationals. The path of pi would exist. But I do see that the tree has this advantage over the Cantor Grid: irrationals appear in the tree via their paths, but in order for an irrational to appear in the grid, there would have to be a determinate (i.e., terminated) numerator over (for a decimal) 10 to a power (appropriate to the numerator). Pi has no final digit, hence cannot be represented in Cantor’s Grid, but it does have a determinate path in my tree, so it IS represented therein. I’ve been insisting, up to this point, that a decimal expansion of any kind would be of the form (numerator to n places)/(denominator to n+1 places), which would make it a ratio. I thought–and still don’t see why not–that for an irrational number to exist as itself, it has to have a determinate content, SOME specific numerator over SOME specific denominator; and could NOT be “roughly” something. If it IS something, it is NOT anything else. I thought that “something” has to be a ratio of numerator to denominator, determinate yet undiscoverable. Determinate meaning “it is what it is, and not something else.” 134. Peter McNamara - November 18, 2009 Here is my request. I would like to be able to search the arxiv for all papers citing any given paper. 135. Scott Morrison - November 18, 2009 @Peter: citebase.org does a reasonable job of this, at least within the arxiv world. 136. David Speyer - November 18, 2009 @Peter Also, MathSciNet offers this option. Of course, the citing and cited paper must be in MathSciNet and the citing paper must be fairly recent. But, within those parameters, I find this extremely useful. See the “from references” link in the upper right. 137. Ben Webster - November 18, 2009 @Peter: Google Scholar is my go-to application for this. It’s not perfect, but it’s the most complete I know. 138. Charles Siegel - November 18, 2009 Bruce: there are more paths than nodes in your tree. That’s the point. But at this point you’re repeating yourself, and so am I, because you’re not listening to me. $\pi$ is NOT a node in your tree, but it IS a path in your tree. The PATHS of your tree are in correspondence with real numbers, yes. But the NODES, which are the things that are in correspondence with the naturals, are not. Thus, you do not have a counterexample to Cantor. Period. Full stop. There was a crackpot paper on the arxiv a bit over a year ago; it’s here. You’re making EXACTLY the same argument, and EXACTLY the same mistake. Just to make sure you don’t leave convinced that it’s right, here’s some info: It’s been posted for a year, and has zero citations.
Also, it received some blog attention and was defeated thoroughly; I’m just going to link to Good Math, Bad Math, where Mark did a MUCH more thorough job explaining why this approach fails than I can in comments. 139. Bruce C. Meyer - November 18, 2009 You’re misreading me or I’m misstating myself, because *you persuaded me*. Hard to believe in academics, but you won me over by the clarity of your argument. I actually stated in my own words why you are right, in the second half of my statement. Perhaps confusing was my description of an irrational as the sum of ratios, i.e., an irrational can be read from left to right as one ratio added to another ad infinitum, but cannot be stated as a single ratio, because there is no termination. I stated my misconception first (in medieval disputation format) and proceeded to show its inadequacy, by restating *your* clear argument. Perhaps interesting to those with a bent for metaphysics, this discussion is similar to one about the possibility of a physical infinity of material space and objects in space. Why was I confused? It is because when I first learned these concepts, they were presented as “received truths” to be heard and learned and integrated into the rest of knowledge. That’s fine, but many more connections need to be made, e.g., these here, before the knowledge gained becomes usable. Howard Gardner reports testing a set of newly-graduated physics majors and asking them physics questions framed in ordinary, non-technical language. They would inevitably give the common-sense, Aristotelian-physics answer rather than what they learned in theoretical physics. (A typical question is: if you drop a rock out of a moving car or airplane, where will it land?) Thank you, my friends. 140. Andrew Stacey - November 19, 2009 @Peter: You can. For example, your paper 0211126 has been cited in 0502278. This probably won’t surprise you, but hopefully it will surprise you that I was able to find this out in about 20 seconds. 141. Peter J McNamara - November 19, 2009 @135,136,137 Thank you for your useful suggestions. I have used mathscinet in the past for this, and the question was prompted by a desire to find more recent preprints than appear in mathscinet. I’ve had a quick look at citebase and google scholar and can’t yet see one service as being obviously better. I wonder how these services are produced, because they have data from arxiv preprints, but the “search for all papers citing this one” does not get to the most recent arxiv postings. 142. Peter J McNamara - November 19, 2009 @Andrew: I didn’t actually write 0211126. I’d like to apologise to Peter R W McNamara and anyone confused about my identity for failing to put any distinguishing middle initial in my comments on this or other blogs to date. 143. Ben Webster - November 19, 2009 I know of particular cases where citebase has missed references to my papers. Google Scholar I haven’t had that problem with, though it does pick up some things which are not papers. 144. Andrew Stacey - November 20, 2009 @Peter: Okay, so I didn’t do a thorough check on names! However, I hope you got my point that it is possible to take someone’s name, look up their papers on the arxiv, and then look up which papers on the arxiv cite those. The fact that I could do that in about 20s and clearly know absolutely nothing about the people or papers concerned should at least tell you something! However, clearly trying to give a “teaser” didn’t work. The arXiv now has a “full text” search facility.
It’s labelled as “experimental” but as it goes from the direct data, it’ll be more up-to-date than citebase, and as it is just the arxiv data, it’ll be more reliable than google scholar. To use it, click “Advanced Search” from an arXiv page. 145. porton - November 22, 2009 Could you post a link to this blog post of mine, with an interesting lattice-theoretic conjecture, to raise the attention of mathematicians to that problem: http://portonmath.wordpress.com/2009/11/02/exposition-complementive-filters/ 146. Scott Morrison - November 22, 2009 @porton 145: No, sorry. In particular, we don’t link to crackpots, and you’ve already scored 20 points. Rule 21, in particular. 147. Thomas Sauvaget - November 24, 2009 Hello, do you guys feel like providing some stats on MathOverflow? It’s been running for roughly two months now, so there are probably interesting trends emerging. For instance, I’ve counted just now 525 “active” users (i.e. those strictly above the basic 11 points, so with at least one upvote). Questions one might ask: is that number what you expected, how did it evolve, how many more will join at this rate, what’s the ratio between senior folks and grad students, is it easy enough to locate previously answered questions, and so on…? That would be a great discussion to have, IMHO. Cheers. 148. Jean Delinez - December 8, 2009 Does anyone have an opinion on Alain Badiou’s use of set theory? Is there anything interesting mathematically there? Also, could anyone shed any light on the comment in the Wikipedia article that says: “This effort leads him, in Being and Event, to combine rigorous mathematical formulae with his readings of poets such as Mallarmé and Hölderlin and religious thinkers such as Pascal.” 149. Ben Webster - December 8, 2009 I’m not impressed by what a little Googling shows, but I tend to be unimpressed by philosophers. 150. john mangual - December 16, 2009 Perhaps someone would like to talk about the étale topology and why it’s so much cooler than the Zariski topology? 151. Bruce C. Meyer - December 16, 2009 Please indulge me with this. What would the reciprocal of an irrational number be? That is, if we can’t specify pi minus 3 to a finite conclusion, say, and all nonzero real numbers must have reciprocals, how can we specify its reciprocal? Thanks. 152. David Speyer - December 17, 2009 Let’s say your real number is specified as the limit of a sequence of rational numbers. For example, $\pi-3$ is the limit of $1/10$, $14/100$, $141/1000$, etcetera. Then $1/(\pi-3)$ is the limit of the sequence of reciprocals: $10$, $100/14$, $1000/141$, etcetera. 153. eagle - January 7, 2010 Hi, I just started graduate study after a year of work; may I ask this simple question. Consider the functions $f_n$ defined by $f_n(x) = n$ if $0 < x < 1/n$ and $f_n(x) = 0$ elsewhere (so $f_n(x)$ approaches $0$ for every $x$). Why is it that the integral of $f_n(x)$ from 0 to 1 is equal to 1? I’m quite confused about the exact reason, since $f_n(0) = 0$ and $f_n(1) = 0$ too, yet the integral between 0 and 1 always seems to be 1. Sorry, just missing some of my calculus lessons perhaps. Many thanks in advance for your response & advice. :-D 154. jo - January 7, 2010 because $\frac{1}{n}\cdot n=1$ :) and that’s the area under the graph of the function on the only subset of $[0,1]$ where it is nonzero, i.e. the integral between 0 and 1 155. eagle - January 9, 2010 Thanks jo! :-), that’s generous of you. I’ll go through integral calculus again :)
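Written out, jo’s computation and the point of the example sit side by side:

$\int_0^1 f_n(x)\,dx = \int_0^{1/n} n\,dx = n \cdot \frac{1}{n} = 1$ for every $n$, while $\lim_{n\to\infty} f_n(x) = 0$ for each fixed $x$,

so the limit of the integrals is 1 but the integral of the limit is 0. Nothing is wrong here: the convergence $f_n \to 0$ is not uniform, and the hypotheses of the standard convergence theorems fail; for instance, the envelope $g(x) = \sup_n f_n(x)$ grows like $1/x$ near 0 and is not integrable, so dominated convergence does not apply.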
156. Sutanu Roy - February 6, 2010 Hi, spaces from a categorical approach. 157. Roman B - February 17, 2010 In his lecture on “how to teach differential equations” Rota writes: “Morse theory is a chapter of topology which grew out of Sturm-Liouville theory.” Can someone expand on this? 158. ABCD - February 23, 2010 This might be a strange request, but here it goes: Am I the only one wishing there were special symbols for “open” resp. “closed subset”? Suppose # were the “open subset” symbol; then one would simply write “thus U # X” or “Let U # X” instead of “thus U is an open subset of X” or “Let $U \subset X$ be an open subset”. I guess I am a bit lazy, but if I had a cent for every time a mathematician has to specify that a subset is in fact an open (resp. closed) subset… Maybe a blog post to see how people would feel about introducing such symbols (and finding someone who knows how to create such latex symbols)? I was thinking of something in line with this (one could also do separate symbols for proper (open/closed) subsets): http://img534.imageshack.us/img534/3617/subsets.png 159. Konrad Voelkel - March 11, 2010 Could you discuss the question of how to connect pure maths (maybe especially algebraic geometry) more to the sciences? There are some obvious (physics) and some less obvious (machine learning, computer science) connections, but most people don’t know much about these, and I guess there are actually a lot more. And if there are no other connections, it would be very interesting to discuss how to make new connections. This would give pure mathematicians the feeling of doing something immediately useful, not only in the far future. At the same time it might be useful for getting funding. One approach to discuss this is found here: http://a.freshbrain.com/solvr/d/howto-connect-pure-math-sciences but I’d prefer a distinguished blog like yours for the discussion. Maybe some of you know something about the current status of “applied algebraic geometry” and the comments will bring to attention the less obvious “applications”. If you know some other place on the WWW for this question, let me know. In general, I’ve asked this meta-question here: http://meta.mathoverflow.net/discussion/276/whats-the-right-place-for-opinion-questions/ 160. Charles Siegel - March 11, 2010 Konrad, you might be interested in my old post (which might get redone better in the next year or so) on a connection with biology. 161. GS - March 12, 2010 A post on the Torelli problem and its current status would be nice. 162. GS - March 12, 2010 A review of popular math books, including the recent “Logicomix”. 163. Charles Siegel - March 12, 2010 I don’t know how Hodge-theoretic the people here are. Which version of the Torelli problem are you thinking about? I’m meandering in that direction over at rigtriv (about to start through Andreotti’s proof for Jacobians of curves). 164. GS - March 13, 2010 @#163. I first heard of the Torelli problem together with the question of when a given abelian variety is the Jacobian of a curve. I suppose what I want is an overview taking off right after the construction of the Jacobian of a Riemann surface, but more detailed than what is available at Springer Online: http://eom.springer.de/T/t093260.htm 165. Charles Siegel - March 13, 2010 GS, that question is the Schottky problem, not the Torelli problem. A Torelli theorem is one that says roughly that you can recover a variety from its Hodge theory.
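A dimension count shows why the Schottky problem has content: $\dim \mathcal{M}_g = 3g-3$ while $\dim \mathcal{A}_g = g(g+1)/2$, and $3g-3 < g(g+1)/2$ exactly when $(g-2)(g-3) > 0$, i.e. for $g \geq 4$. So from genus 4 on, Jacobians form a proper subvariety of the moduli of principally polarized abelian varieties, and the Schottky problem asks for equations cutting out that locus; the Torelli theorem says the map $\mathcal{M}_g \to \mathcal{A}_g$ is injective on points.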
For a quick reference, if you can get your hands on LNM 1337, Donagi wrote a survey of the Schottky problem that’s pretty understandable. For Torelli for curves and their Jacobians (the statement being that the map taking a curve to its Jacobian is injective), I’m going to post in that direction starting this week, and talk a bit about things like the Riemann Singularity Theorem and theta divisors before doing Andreotti’s proof. 166. GS - March 13, 2010 Oops, it seems I have completely gone off the rails. Yes, it is the Schottky problem that I had in mind. Sorry for the complete screw-up. But Torelli will also be nice, now that I have actually read the Springer Online link more carefully… 167. Charles Siegel - March 14, 2010 I’m heading in that direction over at Rigorous Trivialities, though if you’re in more of a hurry, take a look at the Donagi survey I mentioned; and for Torelli, the best proof I’ve seen (Andreotti’s) is written up in Griffiths and Harris. 168. math_student - June 3, 2010 It appears that V.I. Arnold died today, 3 June 2010; sad news indeed. RIP. Source: http://www.les-mathematiques.net/phorum/read.php?2,602224,602224#msg-602224 169. GS - July 10, 2010 I find that at Terry Tao’s blog one can vote comments up or down according to how one likes them. If that were enabled here… 170. Jim Humphreys - August 12, 2010 It might be of interest here to update at times, as well as comment on, what is or isn’t available online from journals and such. The problematic issue is which sources are restricted to subscribers only (often through university libraries). This is an evolving landscape, and it gains in importance as a refereed follow-up to informal arXiv preprints or stuff posted on homepages. In e-math today the AMS is announcing free online access to all their journal archives (now scanned): http://e-math.ams.org/home/page 171. anonymous - November 7, 2010 Just wanted to bring to your attention the plans to shut down the Erwin Schroedinger Institute in Austria: http://prof-yura.livejournal.com/487154.html 172. P. Karambelas - November 11, 2010 News of the death of Peter Hilton (6 Nov.), in case anyone here wants to comment on his life or work. http://www.legacy.com/obituaries/pressconnects/obituary.aspx?n=peter-hilton&pid=146500873 173. bbischof - December 12, 2010 This is not a topic, but a request nonetheless. Can you add functionality so that each blog post can be downloaded as a PDF? I would like to read some posts fairly seriously, and this is much more convenient if they are in a PDF (available on several devices, usable offline, and, personally, more aesthetically pleasing). I expect this should not be difficult to implement, but I could be wrong. 174. John Mangual - January 23, 2011 I’m requesting some blog posts on Knutson–Tao honeycombs. They solve a question about Hermitian matrices, but there’s a tropical version as well that seems to have been useful. 175. Jim Humphreys - January 26, 2011 This is basically a request for the help of bloggers or others who are in touch with people currently using MathJobs to apply for postdoc positions; there is significant confusion about two openings listed by UMass and the Five Colleges (including UMass). The UMass Dept. of Mathematics & Statistics hopes to hire some recent Ph.D.’s, as in the past, under multi-year appointments labelled *Visiting Assistant Professor* (VAP). As usual, decisions are likely to be very late, but such people are always needed to fill gaps in undergraduate teaching and to enhance research life. 
The Five Colleges have also advertised a POSTDOC in *statistics* only. We’re told that some math applicants are mistakenly checking that box and may get overlooked for math VAP positions. Contact active UMass faculty for further information. 176. A public service announcement « Secret Blogging Seminar - January 26, 2011 [...] the request of Jim Humphreys, I’m making a little PSA about postdoctoral positions at UMass: there seems to have been some [...] 177. Samuel Hansen - June 15, 2011 My name is Samuel Hansen and I am responsible for the math podcasts Combinations and Permutations and Strongly Connected Components over at ACMEScience.com, and am the co-host of the Math/Maths podcast. I recently started a Kickstarter called Relatively Prime: Stories from the Mathematical Domain. From the project description: “Relatively Prime will be an 8 episode audio podcast featuring stories from the world of mathematics. Tackling questions like: is it true that you are only seven handshakes from the President, what exactly is a micromort, and how did 39 people commenting on a blog manage to prove a deep theorem. Relatively Prime will feature interviews with leaders of mathematics, as well as the unsung foot soldiers that push the mathematical machine forward. With each episode structured around topics such as: The Shape of Things, Risk, and Calculus Wars, Relatively Prime will illuminate each area by delving into the history, applications, and people that underlie the subject that is the foundation of all science.” I was hoping that the Secret Blogging Seminar could help me get the word out about the project, as I think it is right up your alley. 178. nbdy - August 26, 2011 Hi, I was wondering if you were willing to start a new blog post discussing the recent opinion piece by Garfunkel & Mumford in the NYT http://www.nytimes.com/2011/08/25/opinion/how-to-fix-our-math-education.html?_r=2 That’s worth discussing in the math blogosphere, I would think (especially if Gowers’ latest blog post is about what I think it is about…) 179. Peter McNamara - November 28, 2011 We know that MO provides public database dumps. There are many important maths blogs on the WordPress software. What does WordPress allow in terms of data backups? Is sbseminar archived in usable form elsewhere? 180. Scott Morrison - November 28, 2011 @Peter, WordPress provides a downloadable XML backup. You can see the latest dump at http://tqft.net/sbseminar/ 181. amnestystudentiran - April 14, 2012 Hi, I have a question: what are the symplectic leaves of the sphere of dimension n when n > 2? 182. David Speyer - April 21, 2012 I don’t know, but it would be useful if you said what Poisson form you are using. 183. S. Hutchinson - July 19, 2012 Any explanation of the tiling of Riemann surfaces in connection with an equivalence relation on the Poincaré disk? 184. joshkornbluth - September 22, 2012 As a monologuist who flunked calculus at Princeton, might I immodestly (and self-servingly) draw your attention to the new DVD of “The Mathematics of Change,” a one-man show I performed at the Mathematical Sciences Research Institute? By the way, the website (http://themathematicsofchange.com) has videos of interviews I did with two MSRI mathematicians, on the burning questions of (a) why 0.999… equals 1 and (b) why zero to the zero power is “undefined.” … Rock on, guys! 185. 
Dom Preston - October 5, 2012 Can I suggest taking a look at this recent debate from the UK on the philosophy of mathematics: http://www.iai.tv/video/pythagoras-dream It features a physicist (Lee Smolin) and a couple of philosophers (Hilary Lawson & Peter Hacker) discussing whether or not mathematics is the underlying structure of reality. Worth a watch at least. 186. Jim Humphreys - November 25, 2012 Concerning the Elsevier boycott and related matters, it may be worthwhile to raise the issue of unsolicited “registration” of potential reviewers by journals. I just received an apparently authentic email from the European J. of Combinatorics (Elsevier) stating that I was now registered with them via Elsevier. On the contrary, I have repeatedly told other Elsevier journal editors that I am unwilling to work for their corporation. (One manager wrote back angrily that I was trying to destroy the particular journal.) These automated mailing systems have no opt-out provision, though they do provide a password if you want to visit your “account” with them. To me it all seems unethical. But how can it be stopped? 187. Jim Humphreys - November 25, 2012 P.S. A minor procedural suggestion: if these requests are of any value, it would help to post the most recent ones first.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 109, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391724467277527, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-challenge-problems/94180-find-out-fallacy.html
# Thread: 1. ## Find the fallacy Find the fallacy in the proof. This "proof" will attempt to show that all people in Canada are the same age, by showing by induction that the following statement (which we'll call "S(n)" for short) is true for all natural numbers n: Statement S(n): In any group of n people, everyone in that group has the same age. The conclusion follows from that statement by letting n be the number of people in Canada. The Fallacious Proof of Statement S(n): * Step 1: In any group that consists of just one person, everybody in the group has the same age, because after all there is only one person! * Step 2: Therefore, statement S(1) is true. * Step 3: The next stage in the induction argument is to prove that, whenever S(n) is true for one number (say n=k), it is also true for the next number (that is, n = k+1). * Step 4: We can do this by (1) assuming that, in every group of k people, everyone has the same age; then (2) deducing from it that, in every group of k+1 people, everyone has the same age. * Step 5: Let G be an arbitrary group of k+1 people; we just need to show that every member of G has the same age. * Step 6: To do this, we just need to show that, if P and Q are any members of G, then they have the same age. * Step 7: Consider everybody in G except P. These people form a group of k people, so they must all have the same age (since we are assuming that, in any group of k people, everyone has the same age). * Step 8: Consider everybody in G except Q. Again, they form a group of k people, so they must all have the same age. * Step 9: Let R be someone else in G other than P or Q. * Step 10: Since Q and R each belong to the group considered in step 7, they are the same age. * Step 11: Since P and R each belong to the group considered in step 8, they are the same age. * Step 12: Since Q and R are the same age, and P and R are the same age, it follows that P and Q are the same age. * Step 13: We have now seen that, if we consider any two people P and Q in G, they have the same age. It follows that everyone in G has the same age. * Step 14: The proof is now complete: we have shown that the statement is true for n=1, and we have shown that whenever it is true for n=k it is also true for n=k+1, so by induction it is true for all n. 2. The reasoning is flawless except for the fact that it doesn't work when $n=2$, because your person $R$ does not exist: you assume (tacitly!) that $n\geq 3$, which you obviously cannot do for the inductive step from $n=1$ to $n=2$. Hence it's flawed for $n\geq 2$. 3. Originally Posted by malaygoel (the question and proof quoted in full, as above) Step 9 requires that $k+1\ge 3$, so the induction must start with base case $k=2$, not $k=1$. CB
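To make the failure point concrete, here is a small Python sketch (my own addition, not from the thread): the overlap argument in steps 7–11 needs a third person R distinct from P and Q, and such an R exists only when the group has at least three members, so the step from S(1) to S(2) fails.

```python
# For a group G of k+1 people, steps 7-11 need some R in G other than P and Q.
# That third person exists only when k+1 >= 3.

def overlap_witnesses(group, P, Q):
    """People in `group` other than P and Q -- the candidates for R."""
    return [r for r in group if r not in (P, Q)]

print(overlap_witnesses(["P", "Q", "R"], "P", "Q"))  # ['R']: the argument goes through
print(overlap_witnesses(["P", "Q"], "P", "Q"))       # []: no R exists when k+1 = 2
```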
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9675949811935425, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/229676/lim-limits-n-to-infty-fracnn1n-with-sandwich-rule?answertab=oldest
# $\lim\limits_{n\to\infty}\frac{n^{n+1}}{n!}$ with sandwich rule Determine the result of $$\lim\limits_{n\to\infty}\frac{n^{n+1}}{n!}.$$ I would like to use the sandwich rule for limits, finding two sequences that serve as lower and upper bounds, to establish the limit, which is obviously $+\infty$. So far I have an upper bound: $$\frac{n^{n+1}}{n!}=n\cdot\frac{n}{n}\cdot\frac{n}{n-1}\cdot\ldots\cdot\frac{n}{2}\cdot\frac{n}{1}\leq n\cdot 1\cdot n\cdot\ldots\cdot n\cdot n=n^n\longrightarrow+\infty$$ Which sequence would you suggest for a lower bound that diverges to $+\infty$? - 1 Answer As your sequence is positive, it suffices to find a lower-bound sequence that diverges to infinity: $$\frac{n^{n+1}}{n!}=n\frac{n\cdot n\cdot\ldots\cdot n}{1\cdot 2\cdot\ldots\cdot n}\geq n\frac{n\cdot n\cdot\ldots\cdot n}{n\cdot n\cdot\ldots\cdot n}=n\xrightarrow [n\to\infty]{} \infty$$ - 1 What would you do if it was (n^n)/n! – Adam Rubinson Nov 5 '12 at 13:56 I'd take out $\,\dfrac{n}{1}\,$ from the fraction: $$\frac{n^n}{n!}\geq \frac{n}{1}\frac{n\cdot...\cdot n}{n\cdot\ldots\cdot n}=n....$$ – DonAntonio Nov 5 '12 at 14:05 lol! And if it were (n)^(n-k) / n! (for some fixed k)? – Adam Rubinson Nov 5 '12 at 17:09
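As a quick numeric sanity check of the accepted lower bound (my own sketch, not part of the thread), each factor $n/k$ with $k = 1, \dots, n$ is at least $1$, so the quotient should always be at least $n$:

```python
# Check n^(n+1)/n! >= n for a few values of n, as the answer's bound asserts.
from math import factorial

for n in (1, 2, 5, 10, 20):
    value = n ** (n + 1) / factorial(n)
    print(n, value, value >= n)
```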
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9540746212005615, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/04/13/cotangent-vectors-differentials-and-the-cotangent-bundle/?like=1&source=post_flair&_wpnonce=78d053c4d8
# The Unapologetic Mathematician ## Cotangent Vectors, Differentials, and the Cotangent Bundle There’s another construct in differential topology and geometry that isn’t quite so obvious as a tangent vector, but which is every bit as useful: a cotangent vector. A cotangent vector $\lambda$ at a point $p\in M$ is just an element of the dual space to $\mathcal{T}_pM$, which we write as $\mathcal{T}^*_pM$. We actually have a really nice example of cotangent vectors already: a gadget that takes a tangent vector at $p$ and gives back a number. It’s the differential, which when given a vector returns the directional derivative in that direction. And we can generalize that right away. Indeed, if $f$ is a smooth germ at $p$, then we have a linear functional $v\mapsto v(f)$ defined for all tangent vectors $v\in\mathcal{T}_pM$. We will call this functional the differential of $f$ at $p$, and write $\left[df(p)\right](v)=v(f)$. If we have local coordinates $(U,x)$ at $p$, then each coordinate function $x^i$ is a smooth function, which has differential $dx^i(p)$. These actually furnish the dual basis to the coordinate vectors $\frac{\partial}{\partial x^i}(p)$. Indeed, we calculate $\displaystyle\begin{aligned}\left[dx^i(p)\right]\left(\frac{\partial}{\partial x^j}(p)\right)&=\left[\frac{\partial}{\partial x^j}\right](x^i)\\&=\left[D_j(u^i\circ x\circ x^{-1})\right](x(p))\\&=\delta_j^i\end{aligned}$ That is, evaluating the coordinate differential $dx^i(p)$ on the coordinate vector $\frac{\partial}{\partial x^j}(p)$ gives the value $1$ if $i=j$ and $0$ otherwise. Of course, the $dx^j(p)$ define a basis of $\mathcal{T}^*_pM$ at every point $p\in U$, just like the $\frac{\partial}{\partial x^j}(p)$ define a basis of $\mathcal{T}_pM$ at every point $p\in U$. This was exactly what we needed to compare vectors (at least to some extent) at points within a local coordinate patch, and it let us define the tangent bundle as a $2n$-dimensional manifold. In exactly the same way, we can define the cotangent bundle $\mathcal{T}^*M$. Given the coordinate patch $(U,x)$ we define a coordinate patch covering all the cotangent spaces $\mathcal{T}^*_pM$ with $p\in U$. The coordinate map is defined on a cotangent vector $\lambda\in\mathcal{T}^*_pM$ by $\displaystyle\tilde{x}(\lambda)=\left(x^1(p),\dots,x^n(p),\lambda\left(\frac{\partial}{\partial x^1}(p)\right),\dots,\lambda\left(\frac{\partial}{\partial x^n}(p)\right)\right)$ Everything else in the construction of the cotangent bundle proceeds exactly as it did for the tangent bundle, but we’re missing one thing: how to translate from one basis of coordinate differentials to another. So, let’s say $x$ and $y$ are two coordinate maps at $p$, defining coordinate differentials $dx^i(p)$ and $dy^j(p)$. How are these two bases related? We can calculate this by applying $dy^j(p)$ to $\frac{\partial}{\partial x^i}(p)$: $\displaystyle\begin{aligned}\left[dy^j(p)\right]\left(\frac{\partial}{\partial x^i}(p)\right)&=\left[\frac{\partial}{\partial x^i}\right](y^j)\\&=\left[D_i(u^j\circ y\circ x^{-1})\right](x(p))\\&=J_i^j(p)\end{aligned}$ where $J_i^j(p)$ are the components of the Jacobian matrix of the transition function $y\circ x^{-1}$. What does this mean? 
Well, consider the linear functional $\displaystyle\sum\limits_iJ_i^j(p)dx^i(p)$ This has the same values on each of the $\frac{\partial}{\partial x^i}(p)$ as $dy^j(p)$ does, and we conclude that they are, in fact, the same cotangent vector: $\displaystyle dy^j(p)=\sum\limits_iJ_i^j(p)dx^i(p)$ On the other hand, recall that $\displaystyle\frac{\partial}{\partial x^i}(p)=\sum\limits_jJ_i^j(p)\frac{\partial}{\partial y^j}(p)$ That is, we use the Jacobian of one transition function to transform from the $dx^i(p)$ basis to the $dy^j(p)$ basis of $\mathcal{T}^*_pM$, but the transpose of the same Jacobian to transform from the $\frac{\partial}{\partial x^i}(p)$ basis to the $\frac{\partial}{\partial y^j}(p)$ basis of $\mathcal{T}_pM$. And this is just as we expect, since the transpose is the adjoint transformation, which automatically connects the dual spaces.
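As a concrete check of this transformation law, here is a small sympy sketch (my own addition, not from the post); the coordinate names and the polar-style transition map are illustrative choices, not anything the post fixes:

```python
# Verify that the rows of the Jacobian of y o x^{-1} give the coefficients
# expressing each dy^j in the basis {dx^1, dx^2}, for Cartesian coordinates
# x = (x1, x2) and polar-style coordinates y = (r, theta) on a patch of R^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
y = sp.Matrix([sp.sqrt(x1**2 + x2**2), sp.atan2(x2, x1)])  # the map y o x^{-1}

J = y.jacobian([x1, x2])  # J[j, i] = D_i(y^j o x^{-1}), as in the post

# dy^j(p) applied to d/dx^i(p) is J^j_i(p), so each dy^j expands as:
for j in range(2):
    terms = " + ".join(f"({sp.simplify(J[j, i])}) dx^{i+1}" for i in range(2))
    print(f"dy^{j+1} =", terms)
```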
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 57, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9087887406349182, "perplexity_flag": "head"}
http://en.m.wikipedia.org/wiki/Gravitation
# Gravitation "Gravity" redirects here. For other uses, see Gravity (disambiguation). "Law of Gravity" and "Laws of Gravity" redirect here. For other uses, see Law of Gravity (disambiguation). Hammer and Feather Drop - Apollo 15 astronaut David Scott on the Moon recreating Galileo's famous gravity experiment. (1.38 MB, ogg/Theora format). Gravitation, or gravity, is the natural phenomenon by which physical bodies attract each other with a force proportional to their masses, and inversely proportional to the square of the distance between them. It is most commonly experienced as the agent that gives weight to objects with mass and causes them to fall to the ground when dropped. The phenomenon of gravitation itself, however, is a byproduct of a more fundamental phenomenon described by general relativity, which suggests that spacetime is curved according to the energy and momentum of whatever matter and radiation are present. Gravitation is one of the four fundamental interactions of nature, along with electromagnetism, and the nuclear strong force and weak force. In modern physics, the phenomenon of gravitation is most accurately described by the general theory of relativity by Einstein, in which the phenomenon itself is a consequence of the curvature of spacetime governing the motion of inertial objects. The simpler Newton's law of universal gravitation provides an accurate approximation for most physical situations including calculations as critical as spacecraft trajectory. From a cosmological perspective, gravitation causes dispersed matter to coalesce, and coalesced matter to remain intact, thus accounting for the existence of planets, stars, galaxies and most of the macroscopic objects in the universe. It is responsible for keeping the Earth and the other planets in their orbits around the Sun; for keeping the Moon in its orbit around the Earth; for the formation of tides; for natural convection, by which fluid flow occurs under the influence of a density gradient and gravity; for heating the interiors of forming stars and planets to very high temperatures; and for various other phenomena observed on Earth and throughout the universe. ## History of gravitational theory Main article: History of gravitational theory Classical mechanics Branches Formulations Fundamental concepts Core topics Scientists ### Scientific revolution Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal[1]) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitation accelerates all objects at the same rate. This was a major departure from Aristotle's belief that heavier objects accelerate faster.[2] Galileo correctly postulated air resistance as the reason that lighter objects may fall slower in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. ### Newton's theory of gravitation Main article: Newton's law of universal gravitation Sir Isaac Newton, an English physicist who lived from 1642 to 1727 In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. 
In his own words, “I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly.”[3] Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than general relativity, and it gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies. ### Equivalence principle The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see if they hit the ground at the same time. These experiments demonstrate that all objects fall at the same rate when friction (including air resistance) is negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.[4] Formulations of the equivalence principle include: • The weak equivalence principle: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition.[5] • The Einsteinian equivalence principle: The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.[6] • The strong equivalence principle, requiring both of the above. ### General relativity See also: Introduction to general relativity Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime. The Einstein field equations: $G_{\mu \nu} + \Lambda g_{\mu \nu}= \frac{8\pi G}{c^4} T_{\mu \nu}$ In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. 
The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion, and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.[7][8] In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being acted on by a force. Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along the geodesics in spacetime is considered inertial. Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor. Notable solutions of the Einstein field equations include: • The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric non-rotating uncharged massive object. For compact enough objects, this solution generates a black hole with a central singularity. For radial distances from the center which are much greater than the Schwarzschild radius, the accelerations predicted by the Schwarzschild solution are practically identical to those predicted by Newton's theory of gravity. • The Reissner–Nordström solution, in which the central object has an electrical charge. For charges with a geometrized length which are less than the geometrized length of the mass of the object, this solution produces black holes with two event horizons. • The Kerr solution for rotating massive objects. This solution also produces black holes with multiple event horizons. • The Kerr–Newman solution for charged, rotating massive objects. This solution also produces black holes with multiple event horizons. • The cosmological Friedmann–Lemaître–Robertson–Walker solution, which predicts the expansion of the universe. The tests of general relativity included the following:[9] • General relativity accounts for the anomalous perihelion precession of Mercury (see the Born reference in the Notes below). • The prediction that time runs slower at lower potentials has been confirmed by the Pound–Rebka experiment, the Hafele–Keating experiment, and the GPS. • The prediction of the deflection of light was first confirmed by Arthur Stanley Eddington from his observations during the Solar eclipse of May 29, 1919.[10][11] Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. However, his interpretation of the results was later disputed.[12] More recent tests using radio interferometric measurements of quasars passing behind the Sun have more accurately and consistently confirmed the deflection of light to the degree predicted by general relativity.[13] See also gravitational lens. 
• The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals. • Gravitational radiation has been indirectly confirmed through studies of binary pulsars. • Alexander Friedmann in 1922 found that Einstein's equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927 Georges Lemaître showed that static solutions of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static universe envisioned by Einstein could not exist. Later, in 1931, Einstein himself agreed with the results of Friedmann and Lemaître. Thus general relativity predicted that the Universe had to be non-static: it had to either expand or contract. The expansion of the universe discovered by Edwin Hubble in 1929 confirmed this prediction.[14] • The theory's prediction of frame dragging was consistent with the recent Gravity Probe B results.[15] • General relativity predicts that light should lose energy when travelling away from massive bodies. The group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity.[16] ### Gravity and quantum mechanics Main articles: Graviton and Quantum gravity In the decades after the discovery of general relativity, it was realized that general relativity is incompatible with quantum mechanics.[17] It is possible to describe gravity in the framework of quantum field theory like the other fundamental forces, such that the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from exchange of virtual photons.[18][19] This reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length,[17] where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required. ## Specifics ### Earth's gravity Main article: Earth's gravity Every planetary body (including the Earth) is surrounded by its own gravitational field, which exerts an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body. The strength of the gravitational field is numerically equal to the acceleration of objects under its influence, and its value at the Earth's surface, denoted g, is expressed below as the standard average. According to the Bureau International des Poids et Mesures and the International System of Units (SI), the Earth's standard acceleration due to gravity is g = 9.80665 m/s² (32.1740 ft/s²).[20][21] This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s, or about 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. 
Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time. If an object with mass comparable to that of the Earth were to fall towards it, then the corresponding acceleration of the Earth really would be observable. According to Newton's third law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object doesn't bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration. The force of gravity on Earth is the resultant (vector sum) of two forces: (a) the gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator, due to the centrifugal force caused by the Earth's rotation, and it increases with latitude toward the poles. The standard value of 9.80665 m/s² is the one originally adopted by the International Committee on Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand.[22] This value has persisted in meteorology and in some standard atmospheres as the value for 45° latitude, even though it applies more precisely to the latitude 45°32'33".[23] ### Equations for a falling body near the surface of the Earth Ball falling freely under gravity. See text for description. Main article: Equations for a falling body Under an assumption of constant gravity, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s². The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1⁄20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2⁄20 it has dropped a total of 4 units; by 3⁄20, 9 units; and so on. Under the same constant-gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression $h = \tfrac{v^2}{2g}$ for the maximum height reached by a vertically projected body with initial velocity v is useful for small heights and small initial velocities only. ### Gravity and astronomy The discovery and application of Newton's law of gravity accounts for the detailed information we have about the planets in our solar system, the mass of the Sun, the distance to stars, quasars and even the theory of dark matter. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured characteristics of the orbit, as the short sketch below illustrates. 
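For instance, the Sun's mass follows from Earth's orbital radius and period via Newton's form of Kepler's third law, M = 4π²a³/(GT²). A minimal Python sketch (my own illustration, using standard textbook values, not from the article):

```python
# Estimate the Sun's mass from Earth's orbit: M = 4*pi^2*a^3 / (G*T^2).
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11             # Earth's mean orbital radius, m
T = 365.25 * 24 * 3600   # Earth's orbital period, s

M_sun = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Sun's mass ~ {M_sun:.2e} kg")  # ~2.0e30 kg, matching the accepted value
```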
In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to the square of the distance between them. ### Gravitational radiation Main article: Gravitational wave In general relativity, gravitational radiation is generated in situations where the curvature of spacetime is oscillating, as is the case with co-orbiting objects. The gravitational radiation emitted by the Solar System is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR B1913+16. It is believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as the Laser Interferometer Gravitational Wave Observatory (LIGO) have been created to study the problem. No confirmed detections have been made of this hypothetical radiation, but as the science behind LIGO is refined and as the instruments themselves are endowed with greater sensitivity over the next decade, this may change. ### Speed of gravity In December 2012, a research team in China announced that it had produced findings which appear to show that the speed of gravity is equal to the speed of light. The team's findings were due to be released in a journal in January 2013.[24] ## Anomalies and discrepancies There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or may perhaps be explained in other ways. Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). The discrepancy between the curves is attributed to dark matter. • Extra-fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact gravitationally but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed. • Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity-assist maneuvers. • Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not homogeneous (due to clusters of galaxies) and that when the data are reinterpreted to take this into account, the expansion is not speeding up after all;[25] however, this conclusion is disputed.[26] • Anomalous increase of the astronomical unit: Recent measurements indicate that planetary orbits are widening faster than if this were solely through the Sun losing mass by radiating energy. • Extra-energetic photons: Photons travelling through galaxy clusters should gain energy and then lose it again on the way out. The accelerating expansion of the universe should stop the photons returning all the energy, but even taking this into account, photons from the cosmic microwave background radiation gain twice as much energy as expected. 
This may indicate that gravity falls off faster than inverse-squared at certain distance scales.[27] • Dark flow: Surveys of galaxy motions have detected a mysterious dark flow towards an unseen mass. Such a large mass is too large to have accumulated since the Big Bang using current models and may indicate that gravity falls off slower than inverse-squared at certain distance scales.[27] • Extra-massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may indicate that gravity falls off slower than inverse-squared at certain distance scales.[27] ## Alternative theories Main article: Alternatives to general relativity ### Historical alternative theories • Aristotelian theory of gravity • Le Sage's theory of gravitation (1784), also called LeSage gravity, proposed by Georges-Louis Le Sage, based on a fluid-based explanation in which a light gas fills the entire universe. • Ritz's theory of gravitation, Ann. Chem. Phys. 13, 145, (1908) pp. 267–271: Weber–Gauss electrodynamics applied to gravitation, with classical advancement of perihelia. • Nordström's theory of gravitation (1912, 1913), an early competitor of general relativity. • Whitehead's theory of gravitation (1922), another early competitor of general relativity. ### Recent alternative theories • Brans–Dicke theory of gravity (1961) • Induced gravity (1967), a proposal by Andrei Sakharov according to which general relativity might arise from quantum field theories of matter • Modified Newtonian dynamics (MOND) (1981), in which Mordehai Milgrom proposes a modification of Newton's second law of motion for small accelerations • The self-creation cosmology theory of gravity (1982) by G.A. Barber, in which the Brans–Dicke theory is modified to allow mass creation • Nonsymmetric gravitational theory (NGT) (1994) by John Moffat • Tensor–vector–scalar gravity (TeVeS) (2004), a relativistic modification of MOND by Jacob Bekenstein • Gravity as an entropic force, with gravity arising as an emergent phenomenon from the thermodynamic concept of entropy • Superfluid vacuum theory, in which gravity and curved spacetime arise as a collective excitation mode of a non-relativistic background superfluid ## Notes • Proposition 75, Theorem 35: p. 956. I. Bernard Cohen and Anne Whitman, translators: Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by I. Bernard Cohen. University of California Press 1999. ISBN 0-520-08816-6, ISBN 0-520-08817-4 • Max Born (1924), Einstein's Theory of Relativity (the 1962 Dover edition, page 348, lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury, Venus, and Earth.) ## Footnotes 1. Ball, Phil (June 2005). "Tall Tales". Nature News. doi:10.1038/news050613-10. 2. Galileo (1638), Two New Sciences, First Day. Salviati speaks: "If this were what Aristotle meant you would burden him with another error which would amount to a falsehood; because, since there is no such sheer height available on earth, it is clear that Aristotle could not have made the experiment; yet he wishes to give us the impression of his having performed it when he speaks of such an effect as one which we see." 3. Chandrasekhar, Subrahmanyan (2003). Newton's Principia for the common reader. Oxford: Oxford University Press. (pp. 1–2). 
The quotation comes from a memorandum thought to have been written about 1714. As early as 1645 Ismaël Bullialdus had argued that any force exerted by the Sun on distant objects would have to follow an inverse-square law. However, he also dismissed the idea that any such force did exist. See, for example, Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. p. 225. ISBN 978-0-521-82750-8. 4. M. C. W. Sandford (2008). "STEP: Satellite Test of the Equivalence Principle". Rutherford Appleton Laboratory. Retrieved 2011-10-14. 5. Paul S. Wesson (2006). Five-dimensional Physics. World Scientific. p. 82. ISBN 981-256-661-9. 6. Haugen, Mark P.; C. Lämmerzahl (2001). Principles of Equivalence: Their Role in Gravitation Physics and Experiments that Test Them. Springer. arXiv:gr-qc/0103067. ISBN 978-3-540-41236-6. 7. "Gravity and Warped Spacetime". black-holes.org. Retrieved 2010-10-16. 8. Dmitri Pogosyan. "Lecture 20: Black Holes—The Einstein Equivalence Principle". University of Alberta. Retrieved 2011-10-14. 9. Pauli, Wolfgang Ernst (1958). "Part IV. General Theory of Relativity". Theory of Relativity. Courier Dover Publications. ISBN 978-0-486-64152-2. 10. Dyson, F.W.; Eddington, A.S.; Davidson, C.R. (1920). "A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919". Philosophical Transactions of the Royal Society A 220 (571–581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009. Quote, p. 332: "Thus the results of the expeditions to Sobral and Principe can leave little doubt that a deflection of light takes place in the neighbourhood of the sun and that it is of the amount demanded by Einstein's generalised theory of relativity, as attributable to the sun's gravitational field." 11. Weinberg, Steven (1972). Gravitation and cosmology. John Wiley & Sons. Quote, p. 192: "About a dozen stars in all were studied, and yielded values 1.98 ± 0.11" and 1.61 ± 0.31", in substantial agreement with Einstein's prediction θ☉ = 1.75"." 12. Earman, John; Glymour, Clark (1980). "Relativity and Eclipses: The British eclipse expeditions of 1919 and their predecessors". Historical Studies in the Physical Sciences 11: 49–85. doi:10.2307/27757471. 13. Weinberg, Steven (1972). Gravitation and cosmology. John Wiley & Sons. p. 194. 14. See W. Pauli, 1958, pp. 219–220. 15. Randall, Lisa (2005). Warped Passages: Unraveling the Universe's Hidden Dimensions. Ecco. ISBN 0-06-053108-8. 16. Feynman, R. P.; Morinigo, F. B.; Wagner, W. G.; Hatfield, B. (1995). Feynman lectures on gravitation. Addison-Wesley. ISBN 0-201-62734-5. 17. Zee, A. (2003). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 0-691-01019-6. 18. Bureau International des Poids et Mesures (2006). "Chapter 5". The International System of Units (SI). 8th ed. Retrieved 2009-11-25. "Unit names are normally printed in roman (upright) type ... Symbols for quantities are generally single letters set in an italic font, although they may be qualified by further information in subscripts or superscripts or in brackets." 19. "SI Unit rules and style conventions". National Institute For Standards and Technology (USA). September 2004. Retrieved 2009-11-25. "Variables and quantity symbols are in italic type. Unit symbols are in roman type." 20. List, R. J., editor, 1968, Acceleration of Gravity, Smithsonian Meteorological Tables, Sixth Ed. Smithsonian Institution, Washington, D.C., p. 68. 21. "Dark energy may just be a cosmic illusion", New Scientist, issue 2646, 7 March 2008. 22. "Swiss-cheese model of the cosmos is full of holes", New Scientist, issue 2678, 18 October 2008. 23. "Gravity may venture where matter fears to tread", Marcus Chown, New Scientist, issue 2669, 16 March 2009. http://www.newscientist.com/article/mg20126990.400-gravity-may-venture-where-matter-fears-to-tread.html ## Further reading • Thorne, Kip S.; Misner, Charles W.; Wheeler, John Archibald (1973). Gravitation. W.H. Freeman. ISBN 0-7167-0344-0.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8977987766265869, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/170203/nth-derivative-of-tanm-x?answertab=oldest
# Nth derivative of $\tan^m x$ Here $m$ is a positive integer and $n$ is a non-negative integer. $$f_n(x)=\frac {d^n}{dx^n} (\tan ^m(x))$$ $P_n(x)=f_n(\arctan(x))$ I would like to find the polynomials that are defined as above. $P_0(x)=x^m$ $P_1(x)=mx^{m+1}+mx^{m-1}$ $P_2(x)=m(m+1)x^{m+2}+2m^2x^{m}+m(m-1)x^{m-2}$ $P_3(x)=(m^3+3m^2+2m)x^{m+3}+(3m^3+3m^2+2m)x^{m+1}+(3m^3-3m^2+2m)x^{m-1}+(m^3-3m^2+2m)x^{m-3}$ I wonder how to find a general formula for $P_n(x)$. I would also like to know whether any orthogonality relation can be found for these polynomials. Thanks for answers. EDIT: I proved Robert Israel's generating function. I would like to share it. $$g(x,z) = \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) = \tan^m(x+z)$$ $$\frac {d}{dz} (\tan^m(x+z))=m \tan^{m-1}(x+z)+m \tan^{m+1}(x+z)=m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m-1}(x)+m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m+1}(x)= \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \left(m\tan^{m-1}(x)+m\tan^{m+1}(x)\right)=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \left(\dfrac{d}{dx}\tan^{m}(x)\right)=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^{n+1}}{dx^{n+1}} \tan^{m}(x)$$ $$\frac {d}{dz} \left( \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) \right)= \sum_{n=1}^\infty \dfrac{z^{n-1}}{(n-1)!} \dfrac{d^n}{dx^n} \tan^m(x) =\sum_{k=0}^\infty \dfrac{z^{k}}{k!} \dfrac{d^{k+1}}{dx^{k+1}} \tan^m(x)$$ I also understood that it can be written for any function as shown below (for analytic $h$, where the series converges; thanks a lot to Robert Israel). $$\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} h^m(x) = h^m(x+z)$$ I also wrote $P_n(x)$ in the closed form shown below by using Robert Israel's answer. $$P_n(x)=\frac{n!}{2 \pi i}\int_0^{2 \pi i} e^{nz}\left(\dfrac{x+\tan(e^{-z})}{1-x \tan(e^{-z})}\right)^m dz$$ I do not know the next step: how to determine whether any orthogonality relation exists between the polynomials. Maybe a second-order differential equation can be found by using the relations above. Thanks for advice. - Note that the formula for the derivatives of the tangent is already sufficiently complicated... you might be able to combine that result with Faà di Bruno's formula. – J. M. Jul 13 '12 at 7:47 I added an answer explaining how to generally view the generating function derivation. – Gone Jul 13 '12 at 14:00 – J. M. Jul 15 '12 at 16:25 @J.M.: I checked the other question, but I am looking for $\tan^m(x)$ and related polynomials. It really is not a duplicate. Thanks for the link. – Mathlover Jul 15 '12 at 18:19 ## 4 Answers I don't know if this will help: The exponential generating function of $f_n(x)$ is $$g(x,z) = \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) = \tan^m(x+z) = \left(\dfrac{\tan(x)+\tan(z)}{1-\tan(x)\tan(z)}\right)^m$$ So the exponential generating function of $P_n(x)$ is $$G(x,z) = g(\arctan(x),z) = \left(\dfrac{x+\tan(z)}{1-x \tan(z)}\right)^m$$ - Fantastic generating function you offered. Thanks. Can we find any orthogonality relation for the polynomials using that generating function? Maybe a second-order differential equation can be found by using it? – Mathlover Jul 13 '12 at 6:39 Could you please add or send a link to how we can prove that this generating function works? Thanks a lot for the help. – Mathlover Jul 13 '12 at 6:45 
The formula used to obtain the exponential generating function in Robert's answer is most easily seen with a little operator calculus. Let $D = \frac{d}{dx}$. Then the operator $e^{zD} = \sum (zD)^k/k!$ acts as a linear shift operator $x\to x+z$ on polynomials $f(x)$, since

$$e^{zD} x^n = \sum \dfrac{(zD)^k}{k!} x^n = \sum \dfrac{z^k}{k!} \dfrac{n!}{(n-k)!}\ x^{n-k} = \sum {n\choose k} z^k x^{n-k} = (x+z)^n$$

so by linearity $e^{zD} f(x) = f(x+z)$ for all polynomials $f(x)$, and also for all formal power series $f(x)$ such that $f(x+z)$ converges, i.e. where $\operatorname{ord}_x(x+z)\ge 1$, e.g. for $z = \tan^{-1} x = x - x^3/3 + \ldots$

- Very good operator. Great (+1). $e^{zD} f(x) = f(x+z)$: does it not come from the Taylor series expansion? If you put the proof in your answer to inform the reader, it could be perfect. It is just an idea. Thanks a lot for the answer and advice. – Mathlover Jul 13 '12 at 14:20

@Mathlover Yes, you can view it as a formal operational form of Taylor's formula. What further proof do you seek? – Gone Jul 13 '12 at 14:29

Your answer is ok. I understood what you mean in your answer. The generator is an application of Taylor's formula. Thanks – Mathlover Jul 13 '12 at 14:35

@Mathlover To be clear, it's not intended as a complete answer to your question but, rather, to complement Robert's answer. – Gone Jul 13 '12 at 14:53

It is very clear. Thank you for your kindness – Mathlover Jul 13 '12 at 14:56

For $m \ge 1$, $P_n(x) = m x^{m-n} (1+x^2) R_n(x^2)$ where $R_n(t)$ is a polynomial of degree $n-1$ such that $R_1(t) = 1$ and

$$R_{n+1}(t) = 2 t (t+1) R'_n(t) + (m-n + (m-n+2) t) R_n(t)$$

- Looks like a very nice result. Could you please give me a clue how you got it? Thanks a lot for the answer and advice. – Mathlover Jul 14 '12 at 15:22

I have been working on the problem of finding the nth derivative and the nth antiderivative of elementary and special functions for years. You are asking a question regarding a class of functions I have called "the class of meromorphic functions with an infinite number of poles". I refer you to the chapter in my Ph.D. thesis (UWO, 2004) where you can find some answers.

- Your thesis on Fractional Derivatives and Integrals is very useful, really a good reference. Everything is very clear. There is an example for the nth derivative of $\tan x$ on page 109. What about the nth derivative of $\tan^m x$? How can it be found? Thanks a lot for sharing and for the answer. – Mathlover Jul 14 '12 at 15:34

Read pages 111-114; there I gave identities for some powers of tan(x) in terms of the Psi function, and then it is a matter of differentiating the Psi function n times. Try to work it out for tan(x)^m. – Mhenni Benghorbal Jul 15 '12 at 6:50

@MhenniBenghorbal: I read that part. You gave examples up to $m=5$; I could not reach a general formula for $\tan^m(x)$. How does one find the pattern for the general formula? It also looks like a problem to find the coefficients of the Psi functions in the general formula. Thanks for advice. – Mathlover Jul 15 '12 at 16:12

I am just wondering how you reached this problem? – Mhenni Benghorbal Jul 18 '12 at 1:30
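Robert Israel's recurrence above is likewise easy to sanity-check by machine. Below is a minimal sympy sketch of my own (not part of the original thread) comparing the recurrence against direct differentiation of $\tan^m x$; the tested ranges of $m$ and $n$ are illustrative assumptions:

```python
# A minimal sympy sketch (mine, not from the thread) comparing the recurrence
# above with direct differentiation of tan(x)^m; the tested ranges of m and n
# are illustrative choices.
import sympy as sp

x, y, t = sp.symbols('x y t')

def P_direct(m, n):
    # f_n(x) = d^n/dx^n tan(x)^m stays a polynomial in tan(x) because sympy
    # writes d/dx tan = 1 + tan^2, so x -> atan(y) turns every tan(x) into y.
    return sp.expand(sp.diff(sp.tan(x)**m, x, n).subs(x, sp.atan(y)))

def P_recurrence(m, n):
    # Build R_n(t) from R_1 = 1 using the recurrence, then assemble
    # P_n(y) = m * y^(m-n) * (1 + y^2) * R_n(y^2).
    R = sp.Integer(1)
    for k in range(1, n):
        R = sp.expand(2*t*(t + 1)*sp.diff(R, t) + (m - k + (m - k + 2)*t)*R)
    return sp.expand(m * y**(m - n) * (1 + y**2) * R.subs(t, y**2))

for m in range(1, 5):
    for n in range(1, 4):
        assert sp.simplify(P_direct(m, n) - P_recurrence(m, n)) == 0
print("recurrence agrees with direct differentiation on all cases tested")
```

For instance, for $m=2$, $n=3$ both constructions give $24y^5+40y^3+16y$, matching the OP's explicit $P_3$ formula.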
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280818700790405, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/47214?sort=votes
## How To Present Mathematics To Non-Mathematicians?

(Added an epilogue)

I started a job as a TA, and it requires me to take a five-session workshop about better teaching, in which we have to present a 10-minute lecture (micro-teaching). In the last session the two people in charge of the workshop said that we should be able to "explain our research field to most people, or at least those with some academic background, in about three minutes".

I argued that it might be possible to give a general idea of a specific field in psychology, history, maybe some engineering, and other fields that deal with concepts most people hear about on a daily basis. However, I continued, in mathematics it takes me a good 30-minute explanation to tell another mathematician what a large cardinal is. I don't see how I can just tell someone "Dealing with very big sizes of infinity whose existence you can't prove within the usual system". Most people are only familiar with one notion of infinity, and the very few (usually physicists and electrical engineering students) who might know there is more than one will start wondering why it's even interesting.

One of the three people who gave a presentation that session, who came from the field of education, asked me what I study in math. I answered the above, and he said "Okay, so you're trying to describe some absolute sense of reality." to which I simply said "No".

Anyway, after this long and heartbreaking story comes the actual question. I was asked to give my presentation next week. I said I will talk about "What is mathematics", because most people think it's just solving huge and complicated equations all day. I want to give a different (and correct) look at the field in 10 minutes (including some open discussion with the class), and the crowd is beginner grad students from all over the academy (physics, engineering of all kinds, biology, education, et cetera...)

I have absolutely no idea how to proceed from asking them what math is in their opinion, and then telling them that it's [probably] not that. Any suggestions or references?

Addendum: The due date was this morning. After carefully reading the answers given here, and discussing the topic with my office-mates and other colleagues, my advisor and several other mathematicians in my department, I decided to go with the Hilbert's Hotel example, after giving a quick opening about the bad PR mathematicians get as people who solve complicated equations filled with integrals and whatnot.

I had a class of about 30 people staring at me vacantly most of the 10 minutes, as much as I tried to get them to follow closely. The feedback (after the micro-teaching session the class and the instructors give feedback) was very positive, and it seemed that I managed to get the idea through - that our "regular" (read: pre-math education) intuition doesn't apply very well when dealing with infinite things.

I'd like to thank everyone who wrote an answer, a comment or a comment to an answer. I read them all and considered every bit of information that was provided to me, in the hope that this question will serve others in the future and that I will be able to take more from it the next time I am asked to explain something non-trivial to a layman.

- 6 When people ask my specialty, I say "combinatorics."
When they ask what combinatorics is, I say "it's a broad subfield, but for example one important area is graph theory" and go on to say a bit about that (despite the fact that I'm not a graph theorist) because I technically am telling the truth, I think it conveys the correct spirit, and I haven't had any luck coming up with a one-sentence definition of enumerative combinatorics. [cont'd] – JBL Nov 24 2010 at 13:36

4 In your case, I would probably say something about how there are many orders of infinity and, while our intuitions about finite sets guarantee certain things about infinite sets, they aren't enough to settle all reasonable questions about infinite sets; in particular, it turns out that infinite sets come in different sizes, but we can choose (for some very large sizes of infinity) whether to include them, and study what the implications of such large sizes are. I would probably expect to not get past the fact that there are infinite sets of different sizes most of the time. – JBL Nov 24 2010 at 13:39

2 Sadly, I think trying to explain real mathematics to nonmathematical people is a little like trying to explain German grammar rules to people who don't speak a word of German. I think the best you can do in either case is motivate what you do, not really explain it. – Andrew L Nov 24 2010 at 22:00

11 I speak no German at all, but still find tidbits about unusual features of German grammar to be interesting. And it's my understanding that linguists typically don't speak the languages that they study. – JBL Nov 25 2010 at 1:53

3 Mark Twain did a fairly decent job of explaining at least some important rules of German grammar to English-speaking people without using any German words. From a newspaper article about a tavern that burned down: "When the flames the onthedownburninghouseresting stork's nest reached, flew the parent birds away." Running that phrase together into a single word is exaggerated, but otherwise it's how German syntax actually works. (Some other parts of his account were humorous exaggerations and distortions, but certainly based on actual phenomena.) – Michael Hardy Feb 1 2011 at 0:26

## 26 Answers

I have given talks about mathematics to non-mathematicians, for example to a bunch of marketing people. In my experience the following points are worth noting:

1. If the audience does not understand you it is all in vain.
2. You should interact with your audience. Ask them questions, talk to them. A lecture is a boring thing.
3. Pick one thing and explain it well. The audience will understand that in 10 minutes you cannot explain all of math. The audience will not like you if you rush through a number of things and don't explain any one of them well. So an introductory sentence of the form "Math is a vast area with many uses, but in these 10 minutes let me show you just one cool idea that mathematicians have come up with." is perfectly ok.
4. A proof of something that seems obvious does not appeal to people. For example, the proof of Kepler's conjecture about sphere packing would be a bad choice, because most people won't see what the fuss is all about.
5. You are not talking to mathematicians. You are not allowed to have definitions, theorems or proofs. You are not allowed to compute anything.
6. Pictures are your friend. Use lots of pictures whenever possible.
7. You need not talk about your own work, but pick something you know well.
8.
Do not pick examples that always appear in popular science (Fermat's Last Theorem, the Kepler conjecture, the bridges of Koenigsberg, any of the 1 million dollar problems). Pick something interesting but not widely known.

Here are some ideas I used in the past. I started with a story or an intriguing idea, and ended by explaining which branch of mathematics deals with such ideas. Do not start by saying things like "an important branch of mathematics is geometry, let me show you why". Geometry is obviously not important, since all of mathematics has zero importance for your audience. But they like cool ideas. So let them know that math is about cool ideas.

1. To explain what topology and modern geometry are about, you can talk about the Lebesgue covering dimension. Our universe is three-dimensional. But how can we find this out? Suppose you wake up in the morning and say "what's the dimension of the universe today?" You walk into your bathroom and look at the tiles. There is a point where three of them meet and you say to yourself "yup, the universe is still three-dimensional". Find some tiles in the classroom and show people how always at least three of them meet. Talk about how four of them could also meet, but at least three of them will always meet in a point. In a different universe, say in a plane, the tiles would really be segments and so only two of them would meet. Draw this on a board. Show slides of honeycombs in which three honeycomb cells meet. Show roof tilings in which three tiles meet, etc. Ask the audience to imagine what happens in four dimensions: what do floor tiles in a bathroom look like there? They must be like our bricks. What is a chunk of space for us is just a wall for them. So if we have a big pile of bricks stacked together, how many will meet at a point? At least four (this will require some help from you)!
2. To explain knot theory, start by stating that we live in a three-dimensional space because otherwise we could not tie our shoelaces. It is a theorem of topology that knots only exist in three dimensions. You proceed as follows. First you explain that in one or two dimensions you can't make a knot because the shoelace can't cross itself. It can only be a circle. In three dimensions you can have a knot, obviously. In four dimensions every knot can be untied as follows. Imagine that the fourth dimension is the color of the rope. If two points of the rope are of different colors they can cross each other. That is not cheating because in the fourth dimension (color) they're different. So take a knot and color it with the colors of the rainbow so that each point is a different color. Now you can untie the knot simply by pulling it apart in any which way. Crossing points will always be of different colors. Show pictures of knots. Show pictures of knots colored in the colors of the rainbow.
3. Explain infinity in terms of ordinal numbers (cardinals are no good for explaining infinity because people can't imagine $\aleph_1$ and $2^{\aleph_0}$). An ordinal number is like a queue of people who are waiting at a counter (pick an example that everyone hates; in Slovenia this might be a long queue at the local state office). A really, really long queue contains infinitely many people. We can imagine that an infinite queue 1, 2, 3, 4, ... is processed only after the world ends. Discuss the following question: suppose there are already infinitely many people waiting and one more person arrives. Is the queue longer? Some will say yes, some will say no.
Then say that an infinite row of the form 1, 2, 3, 4, ... with one extra person at the end is like waiting until the end of the world, and then one more day after that. Now more people will agree that the extra person really does make the queue longer. At this point you can introduce $\omega$ as an ordinal and say that $\omega + 1$ is larger than $\omega$. Invite the audience to invent longer queues. As they do, write down the corresponding ordinals. They will invent $\omega + n$, possibly $\omega + \omega$. Someone will invent $\omega + \omega + \omega + \ldots$; you say this is a bit imprecise and suggest that we write $\omega \cdot \omega$ instead. You are at $\omega^2$. Go on as far as your audience can take it (usually somewhere below $\epsilon_0$). Pictures: embed countable ordinals on the real line to show infinite queues of infinite queues of infinite queues...

- 8 +1, but I'm afraid "we can choose color to be the 4th dimension, so if 2 points are different colors they can pass through each other" would lose every nonmathematician I know. – JBL Nov 24 2010 at 13:45

34 Well, actually, I asked "what is the fourth dimension?" and someone said "time", and then I explained it need not be. I showed pictures of colored knots and I told people to imagine that the color is "another dimension", just like when they look at geographic maps and color represents altitude above sea level, which is another dimension. I think it went over well. – Andrej Bauer Nov 24 2010 at 14:38

7 Actually, I thought the idea of using color was very neat; the two points never actually touch if they have different color coordinates. – Todd Trimble Nov 24 2010 at 15:40

12 I once made a ceramic model of a Klein Bottle which was colored so that it did not have any self intersections when you considered color as the 4th dimension. – Steven Gubkin Nov 24 2010 at 16:29

11 Ordinals are a great example, I think. I am sure I'm not the only one who's played the "I dare you times infinity!" "I dare you times infinity plus one!" game as a child, and it might be fun for people to learn that this can be made precise. – Qiaochu Yuan Nov 24 2010 at 20:01

Here is one example of what mathematics is, as opposed to "layman reasoning", that I was told about by Yuval Peres. During WWII, British airplanes were often shot down by the Germans because their armor was too weak. The decision was made to put some extra armor on the planes but, since you can add only so much weight to a plane without affecting its performance, only a few selected areas could be reinforced. Upon looking at the damaged planes that returned to the base, the engineering committee recommended putting armor on the most frequently damaged areas, as seen on the planes available for investigation. It took a statistician to explain why you should do exactly the opposite.

- 7 I don't think this is what distinguishes mathematics from layman reasoning. It is rather what distinguishes a good model from a bad model. Mathematics takes models as an input. Of course, many mathematicians have some above-average talent at telling good models apart from bad ones, but this is not what mathematics is about. – darij grinberg Nov 24 2010 at 14:11

14 Why should you do exactly the opposite?
– Qiaochu Yuan Nov 24 2010 at 14:20

38 @Qiaochu: I am guessing that the reason has to do with sampling bias. The reason that these planes managed to make it back to base is probably because their undamaged regions are critical for their functioning. So, their undamaged regions should be reinforced. – Tony Huynh Nov 24 2010 at 14:25

12 Or, in other words: the most-damaged places on planes that survive are places that are inessential to flight. – JBL Nov 24 2010 at 14:38

4 One can do even better if one has a good estimate of the probability that every square inch of the surface of the plane gets hit with a bullet. The "exact opposite" decision above means they assumed a uniform distribution. – timur Nov 27 2010 at 17:41

My own "go-to" introduction is the Euler characteristic. What is nice about it is that you can have tons of audience participation. First draw a bunch of polyhedra on the board (or have models that you can distribute to the audience). Ask people to count the number of faces, edges and vertices. Write a few of these down on the board. Ask if anyone sees any patterns. Usually at least one person will notice that F+V-E=2. A combinatorial proof of this, by first reducing to the planar case, triangulating, and then removing triangles from the outside in, showing that at each stage you are leaving F+V-E invariant, is something I have had success with even with high school students (at least one on one). At each stage you can have someone in the audience confirm the invariance ("What happens to F+V-E when I take a triangle like this away?")

Have a triangulated torus already prepared. Observe that F+V-E=0. Tell them that in general an "n-holed" donut has F+V-E=2-2g where g is the number of holes. So somehow this number F+V-E depends on whether you could stretch one of these shapes into the other, but not on the rigid geometry. Explain that a similar combinatorial proof would be difficult for the n-holed donut, but there is an entire subject called "algebraic topology" which has developed machinery that makes this kind of result easy to see.

- 2 You can also use a deflated soccer ball for this demonstration. This way you can show that the number is measuring something about the ball, but not a rigid thing. It is not exactly its "roundness", but something else. Then whip out the torus. – Chris Schommer-Pries Nov 24 2010 at 19:50

1 You can also talk about how people living in a 2-D universe could tell what 'shape' it is - how do we generalise this to the 3-D case? By far the best tool I've ever found for this is the Asteroids game play.vg/games/4-Asteroids.html Anyone who's ever played it will love the fact it's a universe with genus 1. – Ollie Margetts Nov 25 2010 at 4:26

@Ollie: then you'll love geometrygames.org/HyperbolicGames . – Qiaochu Yuan Nov 25 2010 at 19:13

There is this nice quote whose wording I can't quite recall. It is something like "physics is the study of the laws of God. Mathematics is the study of the laws even God must follow." I think there are some nice elementary examples of this in certain areas of combinatorics. Consider, for example, Ramsey numbers: if $6$ people are at a party, either $3$ of them all know each other or $3$ of them all don't know each other (but this is not true for $5$ or fewer people). That's something most people don't know; it's really easy to demonstrate by picking six people from the audience, and it is of a totally different flavor from the "equational" mathematics most people are familiar with.
(Caveat: I have never actually tried this demonstration.) You can then continue: if $18$ people are at a party, then either $4$ of them all know each other or $4$ of them all don't know each other (but this is not true for $17$ or fewer people). Then you continue: the corresponding best number for $5$ people is not known. This is just about the most easily stated open problem I know, and it is a good way to show people that mathematics is not "finished" in any meaningful sense. If you were sufficiently handwavy and included lots of pictures, it might even be possible for you to sketch the proof that all the Ramsey numbers exist.

Another potentially good example is Hall's marriage theorem, especially if you use the marriage-theoretic terminology the entire time. I saw a lecturer do this recently and it was quite funny.

- Both the links are missing apostrophes. I can't fix it because (it seems) I don't have enough reputation to put links in answers. – Max Nov 24 2010 at 15:57

The links don't appear properly if I add in the apostrophes. I'm not sure how to escape them. – Qiaochu Yuan Nov 24 2010 at 17:40

Ok, fixed: you can use %27 to escape the apostrophes. – Max Nov 24 2010 at 18:54

3 +1 for "marriage-theoretic terminology" :) – Nick Salter Nov 24 2010 at 23:35

7 I found the source of the quote again! It's from Mumford's foreword to Parikh's _The unreal life of Oscar Zariski_: "Everyone knows that physicists are concerned with the laws of the universe and have the audacity sometimes to think they have discovered the choices God made when He created the universe in thus and such a pattern. Mathematicians are even more audacious. What they feel they discover are the laws that God Himself could not avoid having to follow." – Qiaochu Yuan Dec 12 2010 at 2:06

For some reason, many mathematicians have trouble with the idea that when some layman asks them about their work, the appropriate response is not to try to figure out how to describe the latest theorem you've proved. We seem to feel like we're "selling out" unless we try to describe all the technical subtleties of our most recent work. Some other professions seem much better at this: when a psychologist is asked about what they do, they don't reply with their intricate struggles to minimize bias in their latest experiment, even if that's what's occupying most of their attention that week. (On the other hand, psychologists can say "I devise experiments to study the long-term effects of alcohol abuse on cognition" and be reasonably confident that this will make at least some superficial amount of sense to a generic college-educated person. If I say "I study period-index problems in the Galois cohomology of abelian varieties" then, notwithstanding the considerable syntactic similarities between these sentences, the social effect could hardly be more different: I might as well say "Please go away".)

I have to say that I feel privileged never to have had to prepare a ten minute (or less!) précis of my work to a general audience in any kind of formalized setting. I agree that that does not sound like much fun -- for me or the audience -- and if asked to do so at this point in my life I would raise my eyebrow and begin to question (inwardly at least) the assumptions and goals of the person who wanted me to do so.

However, one of the necessary evils of socializing with people outside of the mathematical sciences is that you are inevitably asked "What do you do?"
in very informal settings. Usually my first answer is that I'm a mathematician, and my second answer is either (depending upon my mood?) that I'm a number theorist or that I'm an arithmetic geometer. The second answer is more ambitious, because after the expected "What's that?" I have to explain that I work in sort of a hybrid of two fields: number theory -- the study of properties of the whole numbers like primes and divisibility -- and algebraic geometry -- the study of the curves, surfaces and higher dimensional objects that arise as solution sets to polynomial equations. When I'm on and the other person cares I can get all this out in a couple of minutes without causing any obvious trauma.

If they want to hear more than this I often state Fermat's Two Squares Theorem. I think this is nice because it's specific and it's relatively simple but certainly not obvious: indeed it's a little window into how pleasantly surprising mathematics can be: why should there be such a nice, clean pattern like this? Of course this is the number theory of 350 years ago, not of today, but this was the theorem that attracted me to number theory in the first place, when I first learned about it at the age of 16. (Actually, in my more recent career I have spent time thinking about different proofs and generalizations of exactly this result. But while speaking to a layman I probably wouldn't even remember that.)

I should say that sometimes I get completely cut off in my statement of the two squares theorem -- I mean cut off in the middle of a sentence. And then I have often gone on to have quite a pleasant conversation on something else entirely. In fact, my most negative experiences in the "What do you do?" game have come from non-math people who have insisted on hearing about exactly what I've been working on, with all the technical terminology. For instance, shortly before I received my PhD I went to a bar with my cousin and somehow found myself at a table full of medical students and residents. The above gambits were not sufficient for them. At one point one of them demanded to know the title of my thesis. "All right: it's 'Rational points on Atkin-Lehner quotients of Shimura curves'." His response? "Okay. So basically you study points on curves." He said this with the smug pleasure of someone who had demonstrated that once all the big words had been stripped away, the Harvard PhD student was actually studying something very simple and childish. Of course, having omitted all the big words, even an expert wouldn't have the slightest clue as to what the title meant. What a jerk.

- Pete, I somehow got to read this again. I can say that my reply to the medical student would have been to diminish him to a butcher or something similar. Being a jerk can be fun, especially when alcohol is involved! :-) – Asaf Karagila Aug 15 2011 at 12:45

If you start with the phrase "Rational Points on Atkin-Lehner Quotients of Shimura Curves" and remove all the words the medical student did not know, you are left with "Points on of Curves" - almost precisely what he echoed back to you. :) – Austin Mohr Nov 22 at 20:37

If you haven't heard of it already, Timothy Gowers has written a lovely little (150-page) book called Mathematics - A Very Short Introduction http://books.google.ie/books?id=DBxSM7TIq48C It's a great read and I'm sure you will get some nice ideas from it.

- 16 That said, I know exactly how you feel. I took a "generic skills" course called "Information, Communication, and Literacy".
It was at around the same point (when we were asked to convert our research questions into soundbytes) that I finally quit. I'm going to suppress the rant and just suggest that you should stand up to this kind of twitterification. A cool thing to do would be to say "Go read a book" and suggest a reading list. – Sonia Balagopalan Nov 24 2010 at 10:07

16 I think we need to distinguish between popularization of math and real math. Popularization of math is important; there should be "soundbytes" that try to explain to ordinary people what math is. If you tell ordinary people "go read a book" they will just think you are an arrogant jerk (which you are if you say that). Of course, when it comes to real math the story is different. Math presented in papers, funding proposals and conferences should be done properly. – Andrej Bauer Nov 24 2010 at 12:02

5 @Andrej: I'm a big fan of math popularization, but it is notoriously difficult to do well; one might even argue that it's enlightened non-mathematicians who do it best, and the books in my library seem to agree. I think that asking a graduate student to summarize their research topic in a soundbite is quite unfair, and has nothing to do with popularizing math (though students should be warned that such requests tend to happen quite a bit during hiring season). – Thierry Zell Nov 25 2010 at 2:05

A big part of math is about transformations and structure. For a wide audience, you can give an inkling about this in a short time along the following lines:

• "Numbers" are really abstract things, not just "quantities". They can correspond to transformations --- for example, dilations and translations of the line. Show how this corresponds to multiplication and addition. Negative numbers are flips. This explains "negative times negative is positive," and shows $x^2=1$ has two solutions.
• Solving $x^2=-1$ corresponds to answering the question "What can you do twice to get a flip?" Likely as not someone will think of rotating 90 degrees. Dilations, translations and rotations of the plane are complex numbers.
• "What about rotations in 3D?" Demonstrate they don't always commute. A lot of math is about understanding the rules and concepts that govern much more general transformations of complicated kinds of data.

- I realized recently that one problem with these types of talks is that (in the US at least) the audience often has no idea where mathematics comes from, even in a naive sense. In psychology, physics, engineering, etc, most people have a vague sense of what the roots of the discipline are. But in mathematics, particularly pure mathematics, the sense of purpose of mathematics is what is usually missing for a lay audience member. I have found the work of Saunders Mac Lane in Mathematics: Form and Function to be very helpful when discussing mathematics with non-mathematicians. Once these ideas are "in the air," then specific problems like the Euler characteristic, graph coloring, etc, have a context in which to be appreciated (I agree that trying to discuss specific research problems at the lay level is typically impossible in pure mathematics).

Mac Lane's argument is very roughly the following:

1. Human cultural activities lead to
2. Recognition of mathematical ideas, which lead to
3. Mathematical formalism.

There is a nice table on the wikipedia page about Mac Lane's book with lots of examples of this.
If I am giving a talk about what mathematicians do, or what mathematics is about, I discuss Mac Lane's ideas first to set the stage for what is to come. It is remarkably easy to discuss this in just a few minutes, and it might be helpful for your situation as well.

- As has been said, the main point is to give up the idea of communicating your actual research topic. Even job seekers giving colloquium talks should usually not attempt this. The best such talks instead teach the audience, even mathematicians, the simplest underlying ideas of their subject, and only mention the direction of their own work in the last few minutes or so.

When discussing infinity with laypersons, I have often used "Hilbert's hotel", in which the infinitely many rooms are all full when another guest arrives. Everyone moves up one room and the new guest goes in room 1, thus showing that infinity plus 1 equals infinity (as a cardinal). The audience easily figures out how to add 2 new guests or a thousand. Next ask them how to deal with an infinite sequence of new guests, who of course may all be placed in the odd-numbered rooms, as the current guests each move from room n to room 2n.

When teaching topology I asked how to tell if an invisible butterfly net actually enclosed the butterfly, by looking at the winding behavior of the visible border of the net. When discussing higher dimensions, it is easy to get people to eventually visualize a four-dimensional sphere as a family of three-dimensional spherical slices, by starting with lower dimensional cases. Noting that one can escape a circle in the plane by jumping over it in three dimensions, point out that if one goes back in time, before the building one is in was built, one can escape a three dimensional room without breaking down the walls or opening the doors.

edit: To convey the idea that knowing "how many" differs from knowing when two sets are "equipotent", I used the example of the cyclops in Ulysses, who knew when all his sheep were back in the cave by matching them up one to one with a pile of rocks. Nonetheless he did not know "how many" sheep he had. I.e. "what number?" differs from "same number".

- Besides what Sonia suggested, also take a look at the classic What is Mathematics? by Courant and Robbins. It is a bit long, so I am not sure how much you can pick out of it in one week, but good luck. For preparing your talk, you may want to consult some of the advice given by V. I. Arnold.

A good way to start any general discussion about mathematics, I think, is to remind people of the definition of the word mathematics. The original Greek term, μάθημα, means study/learning/science, which I think says a lot about the typical, somewhat idealized, mathematical worldview.

That said, your task is significantly harder than mine would be were I in your position. Large cardinals, set theory, and logic are much harder to explain to the average audience than something pedestrian like PDEs or sexy like mathematical physics.

- The (implicit) message, of course, is that the level of abstraction varies in inverse proportion to the ease of explaining it satisfactorily to non-specialists... (not to knock down anything; I've been in that uncomfortable position myself.) – J. M. Nov 24 2010 at 11:21

Mathematics is about the reasoning that can be made precise. The different branches of mathematics reason about different objects: numbers, shapes (rigid or stretchy), games, arrangements and relations, and other things for which words do not exist in everyday language.
There are some branches of mathematics exploring the reasoning itself. Pretty much any set of rules of reasoning one can normally think of is equally powerful (we say equiconsistent). "Large cardinals" is a name for various rules that are stronger.

To explore reasoning about different things, give a puzzle to the class. Many people like them, and do not think of these as maths. I would consider "hats" puzzles, or some topological puzzle (e.g. is this picture an unknot? are there two antipodal points on the surface of the Earth with the same temperature? etc). There are good puzzle books to get ideas from, such as those by Martin Gardner and Raymond Smullyan.

- You absolutely cannot explain the details of a research question in ten minutes, but you should definitely be able to explain the philosophy -- in the literal sense of "love of wisdom" -- of your problem in that much time. Mathematics lets you take really primal ideas (like continuity, symmetry, smoothness, shape, proof, truth, size, chance and information) and make them precise. This is really wonderful, and it's very much to your advantage to be able to communicate this wonder to others -- this is really what distinguishes mathematics from accounting or chess.

Anyway, in ten minutes you can't communicate all of this, but you should be able to show off a gem or two. The go-to subject for this kind of demonstration is group theory, but if you want to focus on your own area, you have a big advantage. Large cardinals belong to logic, and so numerous great mathematicians have spent an enormous amount of effort trying to understand and explain them. For example, you can motivate large cardinals by giving a safe impredicative definition, such as the greatest lower bound of a set, and then contrasting it with some unsafe ones, such as Russell's paradox or the Liar paradox. This tension -- between the enormous practical utility of impredicative definitions and their tendency to open the door to paradox -- can be used to motivate large cardinals via the question of how close to the edge is safe. Note that the question here is really a basic one: when does a definition actually define something?

A good resource is Poincaré's essay "Logic and Mathematics, II", in which he argues for predicativism. Large cardinals are sort of the maximal rejection of his view, but Poincaré is such a good writer and thinker that he's worth reading if only to get the most beautiful exposition of the alternative.

- 13 You cannot give a definition; that's way too complicated. Ordinary people are not familiar with the idea that something can be defined at will. – Andrej Bauer Nov 24 2010 at 11:58

4 The poster said his audience would be new graduate students from the sciences and humanities. I would expect them to have an intuitive idea of what a definition is, and to have seen some definitions before in their own studies. If they haven't realized that "what is a valid definition?" could even be an open question, then this is good! The demonstration that it is a nontrivial question which nonetheless has rigorous answers could be an enlightening thing to teach. – Neel Krishnaswami Nov 24 2010 at 14:48

I should add that I think your answer is fantastic, though. – Neel Krishnaswami Nov 24 2010 at 14:51

I know it is kind of trite, but why not start out by saying that most people think that there is only one kind of infinite set (and all mathematicians thought so until 150 years ago) but in fact there are several kinds.
In particular some, like the rational numbers, can be arranged in a list (sequence), while others, like the real numbers (infinite decimals), cannot---and then give Cantor's diagonal argument.

- 9 What I've heard several people say is that it is very hard to get most non-mathematicians to accept Cantor's diagonal argument. For example, Dick Lipton and Terence Tao have both blogged about this several times, and Kevin Buzzard says something similar in the question linked to in the comments. So I don't know if this is the best idea. – Qiaochu Yuan Nov 24 2010 at 17:42

4 I think this is because Cantor's diagonal argument is not about real concepts but made up ones. If you take formalism seriously, then you have to start by explaining formalism to your audience. "Mathematics is about figuring out the logical consequences of ideas we have made up, according to a specific notion of logical consequence that we have made up." From a layman's perspective, there is no reason to accept infinite sets, or that the existence of a bijection gives a meaningful equivalence relation on sets, or that the equivalence classes ought to be studied. – Alexander Woo Nov 24 2010 at 21:57

1 @Alexander: right. You have to fight against your audience's intuition, in a bad way, to convince them that Q and N have the same "size" but Q and R do not, and 10 minutes is not really enough time to do this... – Qiaochu Yuan Nov 24 2010 at 23:11

Even as a math student at the undergraduate level, I had a hard time understanding Cantor's diagonal argument: it took me at least three explanations (from the textbook, from the lecture and finally from a fellow student) to understand how it works, although now I work with similar arguments all the time (I'm currently working on a Ph.D. in set theory!) However, explaining why $\mathbb N$ and $\mathbb Z$ have the same size, although intuitively $\mathbb Z$ contains "twice" as many elements as $\mathbb N$, would be a really good idea as the argument is not nearly as involved as Cantor's. – David FernandezBreton Mar 13 2011 at 7:36

Probably talking about the set $\mathbb Q$ is also involved, since I have seen first year math undergrads having problems with understanding the concept, but it's possible to talk about, for instance, the set of even numbers, or the set $\{n^2|n\in\mathbb N\}$, both of which also have the same size as $\mathbb N$ although they seem "smaller"... this should be enough to let the audience appreciate how counterintuitive the concept of infinity can be, and we don't need to jump to uncountable sets to generate that effect. That's exactly the approach followed by Hans Magnus Enzensberger in his book – David FernandezBreton Mar 13 2011 at 7:41

You can look at the argument in Mendelian Dynamics and Sturtevant's Paradigm; it combines a nice application:

In the 1913 paper "The linear arrangement of sex-linked factors in Drosophila, as shown by their mode of association", Alfred Sturtevant, long before the advent of molecular biology and the discovery of DNA, deduced the linearity of the arrangement of genes on a chromosome from the statistics of simultaneous occurrences of particular morphological features in generations of suitably interbred Drosophila flies. Thus he obtained the world's first genetic map, i.e. he determined relative positions of certain genes on a chromosome, where he used his ideas of linearity and of gene linkage.
and a simple but conceptual maths argument:

On the mathematics side, Sturtevant's reasoning may seem to be limited to the banal remark saying that if in a finite metric space the triangle inequality reduces to equality on every, properly ordered, triple of points, then the metric is linear, i.e. inducible from the real line. But this is not exactly what is truly needed, as Sturtevant's linearity is more about the order or, rather, the "between" relation, than about metrics.

Another suggestion coming from the same paper is the Hardy-Weinberg Principle for Allele Distributions; it is different but possibly more familiar maths to your audience, and you may add a personal touch (Hardy was proud to be a pure mathematician, yet arguably this is his most cited result nowadays).

- Though I am not teaching on a regular basis, I often explain what mathematics is to laymen. My explanations tend to converge toward the following lines:

1) Mathematics is poetry. Two quotes:

Quote 1: Mathematics is the art of giving two names to the same thing AND the same name to two different things (HENRI POINCARÉ).

Quote 2: In mathematics you have absolute liberty; the price to pay for this is that you have to be very precise (YURI MANIN).

2) Maths is made of observations and of rendering them, with an eventual need to make up a new language, just as anybody would need one in a complex and professional field (say dancing).

3) THE WORLD OF A SURFACE IN ITSELF

Then show them a band of paper; make it a cylinder (2 faces, 2 circle boundaries). Then link it with a twist and ask them to count the boundaries and then the faces. (They will be astonished and see the difference for themselves...)

3.a) On a Moebius band a river drawn in the middle (a blue pencil will do) has only one bank (let them check it); it is a different world if you live on it (you are little bugs with no sense of the third dimension).

3.b) Tell them about the game of cylindrical chess (played on a torus), abstracting the game if necessary (just moving pieces); those who know the rules of chess feel more at ease. Show them the game while remaining flat, then tell them that for someone really dumb you could imagine producing a real torus by bending the board and using magnetic pieces. As a world (a fighting world, for example) make the observation that proximity is changed.... (chess serves as a surrogate topology in this, but you do not have to pronounce the frightening word topology).

3.c) Back on the Moebius band: the game of chess on it is not the same as the cylindrical one: a piece does not move the same way and does not attack the same squares...

4) DIFFERENT WORLDS AND VIEWS: Now take a band and make it a cylinder with a knot first (you need a band long and thin enough); put it side by side with the normal cylinder and ask them if it is the same. After their answer, yours is of course yes AND no (the ambient or the embedded surface). A matter of point of view.

5) USEFULNESS OF MATHEMATICS: Of course there are lots of applications, but the killer example is this: In 400 BC the Greeks were doing land regrouping (consolidation); each plot was measured by willing geometers. A year later there were plenty of lawyers at work, because the pieces of land had been measured by perimeter!! Tell them that it might seem obviously stupid to do so, yet it is basic school that taught them the concept of area. Moreover, using the perimeter might be a good way to do things if the goal was not farming but showing off with high flags and poles. Again, many points of view, blablabla...
NOTE: The interactivity is essential, at least when checking the boundary of the Moebius band with the finger (or by sight for some). This is a close call for ten minutes; part 5 can be removed. Try it on some non-mathematical friends first; after three times you will probably be quite slick. It is also important to have the right length and width for the paper band: roughly 12 inches by less than one, usually the side of a sheet of paper...

- I've come across the same issue (that it's difficult to explain math to non-mathematicians) many times, but inspired by this thread I decided to think more precisely about possible causes and solutions to this problem. I was further inspired because my dad happens to be in town, and when I tried to explain set theory to him, one response I got from him was essentially the same one you got, "so you're trying to describe ultimate reality."

First, I think the following important features about mathematics in general (i.e. not just set theory specifically) are unknown to most people:

1. Math is vast - People somehow know that most academic fields are vast, but they don't know this about math. For instance, someone who has only studied up to classical mechanics has still heard about general relativity, quantum mechanics, fluid dynamics, electricity and magnetism, etc. On the other hand, a lot of people honestly think there's nothing more to math than matrices and calculus.
2. Math is about cool ideas - It's about coming up with cool ideas, exploring them, figuring out facts about them, and proving these facts using both creativity and logic. It's not about crunching out numbers using complicated formulas.
3. Math is not about the real world - This is an overgeneralization to the point that it's false, but it might be closer to the truth than what your audience thinks math is about. Historically, math has absolutely been about modeling the real world and solving real world problems, but for various reasons, much of modern math is done purely for its own sake, and the content it discusses is very abstract. People will try to relate what you explain to them to something they already know, and this is an entirely natural thing to do, but it's almost surely bound to miss the point.
4. Math is new - People will often look at you quizzically when you say you study math, and ask you, "what's there left to figure out?" Although the formulas of single variable calculus that they're familiar with have been figured out for centuries, there are constantly new questions arising in math, especially since math is so vast.
5. Math is a different language - First of all, there's a lot of technical strange-sounding vocabulary. Secondly, there's a lot of technical familiar-sounding vocabulary that means something different in natural language - be careful about how you use the word "axiom" for instance.

When it comes to set theory specifically, if you get a response like, "so you're trying to describe ultimate reality," you have to spend a bunch of time convincing them that current set theory isn't really about anything that they already know or are familiar with. If you manage to do this, then you'll be faced with the following question, "then what's the point?" This is where some historical motivation becomes necessary, I think.
So you could explain that Cantor started the study of sets almost 150 years ago because people were starting to study the real numbers, and functions on them, in more and more sophisticated ways, and talking about infinity in more and more sophisticated ways, and so this "required" a more sophisticated, rigorous framework for talking about these things. This naturally led to asking more precise questions about infinity. Thinking about the real line as a set of points and not just a geometric line, and thinking about functions as objects that act on points, was quite a novel idea at the time.

You can try to explain the importance of bijections to the notion of size and counting, and then state and vaguely explain that $\mathbb{N}$, $\mathbb{Z}$, and $\mathbb{Q}$ have the same size, but $\mathbb{R}$ is strictly bigger. Emphasize that $\mathbb{Q}$ only seems bigger than $\mathbb{N}$ because of the way it's arranged, not because it actually has more things. Then introduce Cantor's Problem - is there a subset of the reals that's a bigger infinity than the naturals but a smaller infinity than the whole set of reals?

Then I'd talk about Hilbert and his famous problems, the first two of them being about putting math on a rigorous logical foundation. Talk about how truth and provability can be formalized, what it would mean for a formula to be true (in some model) but unprovable (from some axioms), and then talk about Gödel's groundbreaking results on incompleteness. I would then come back to set theory, mentioning that a great variety of questions had come up in set theory, and a good deal of them had turned out to be undecidable, including Cantor's Problem.

- Set theory is probably not about "describing ultimate reality", but it certainly has to do with describing what can be thought of as the "ultimate building blocks of mathematical reality". Of course, let's not use this as an explanation of math to a nonmathematician (although, as far as I know, that's the usual explanation of what math is about for philosophers - once I told a philosophy professor that I was working on number theory, and he immediately replied asking whether that had to do with coming up with the "right" definition of number). – David FernandezBreton Mar 13 2011 at 7:59

Vi Hart is a master at doing the opposite: presenting mathematics which on the surface seems to be aimed at laymen, but is actually aimed at mathematicians, in my opinion. Granted, those mathematicians might be young, and may not have realized yet that they actually are mathematicians. This video from her is actually fairly deep, while following Andrej Bauer's 8 suggestions above almost to the letter.

- Although the speed of her presentations is not really to my taste, I have to say that I find her basic schtick pretty clever: "So you're sitting in your boring trigonometry class doodling while your teacher is droning on about..." and meanwhile there's all this really cool, interesting mathematics in the doodling. So subversive and witty! – Todd Trimble Oct 25 at 15:12

I would suggest having a look at the opaque square problem. The story of the problem is the following: Suppose you own a square piece of land and you are told that a phone line runs through it. As you have no phone and internet connection yourself, because the phone company cannot, or will not, provide it, you want to find this line and rig up the thing yourself. Now the question is: how long is the shortest trench you would have to dig to find this phone line?
If you restrict yourself to a connected trench, the optimal solution is a Steiner tree for the four corner points. For a trench system consisting of two connected parts, there is also a shortest solution known, which is shorter than the one consisting of one connected set. If you ask about the shortest trench without restricting yourself to trenches with at most n components, this problem is, to the best of my knowledge, still open.

It is actually quite fun and usually leads to a lively discussion if you let the audience guess what these trenches should look like, probably with some support. This problem can also be taken to the next dimension, by looking at the opaque cube problem, or any other shape you like. Also, there are different stories one could choose to introduce this problem.

- 1 Do you mean to imply that the phone line is a straight line? – Zsbán Ambrus Jan 31 2011 at 21:33

I always explain that math is a way of creating new knowledge from old using two things: deductive logic and abstraction. Then I try to give reasonably nontrivial examples of each. But of course I can't remember right now what exactly I say. For the latter, if there's enough time, I like to use groups as abstracting both numbers and geometric transformations.

- There are two things I say to people when I am in such a situation:

1) I think of math as being divided into 3 parts: algebra, analysis and topology. Each of these comes from starting with a set and looking at different types of structure on it. Algebra is about taking two things in the set and asking how you can make another element of the set given those two things (or maybe take 3 things! etc). Analysis comes from taking two things in your set and asking what the distance is between them. Topology is about seeing when two things in your set are close to each other (it gets hard to convince them this is different from analysis, but it is doable).

Now first I should mention that this is a very simplistic description and probably incorrect, but you only have ten minutes, and the audience can feel like they learned something about the field, or at least how it is put together, or was at least 100 or more years ago. This also might not appeal to you, since I haven't said anything about set theory, but it does give you a jumping off point for talking about what you can do when you don't have any of those more "sophisticated" structures lying around.

2) I study algebraic topology, so I try and talk about how you might try to differentiate two geometric objects, or rather determine if they are in the same homeomorphism class. First you have to say when you will be thinking of two things as the same. This seems a bit strange to people, but remind them of congruent triangles and how natural it is to think of congruent triangles as the same thing when they are in different places. Then tell them it is a lot like that, but with different rules: no tearing or untearing/gluing. Next you can talk about rudimentary invariants like things being path connected: if I stay inside the space, can I get to every other point? Next, when I remove a point, is it still path connected? etc... This gets very strange and hard quickly, so you need some new tools to accomplish your goal. So you talk about $\pi_1$, which is not too hard to convince them that they understand. The best space to help them compute the fundamental group of is $\mathbb{R}^2-0$; that is something they can wrap their heads around, but don't mention the group structure.
And I talk about how this invariant can detect differences, sometimes, but it doesn't tell you when things are the same. That takes about 30 minutes or so at least, and it is not about set theory.

My recommendation would be to take that model of presentation of a field and use it in the following way. There are certain types of problems in fields, the big problems you know, like classification problems, enumeration, and computation (I am sure there are other big schemes for programs, I just can't think of them, or don't know them). There must be some big program in set theory that you can talk about, and early toy problems that people may have tested the theory on to look at. Pick some small examples and use imprecise and soft words that are nontechnical. Try to draw a picture and don't use mathematical symbols. Maybe you could talk about how $\omega = 1 + \omega \neq \omega + 1$ (where $\omega$ is the first ordinal after all of the finite ones; maybe I have it backwards or wrong). That is kind of cool, and doable... I think.

Anyway, that is what I do when people ask; hope it helps. (I also have a similarly watered down explanation of spectral sequences.)

Again, apologies for misrepresentations or inaccuracies; no disrespect is meant.

- I find that, when working with non-math people, it helps to approach the mathematics from a problem solving perspective. Why did anyone find it so important to develop this in the first place? Where can this be used today? Students tend to believe that mathematicians live some kind of arcane life behind academic walls. In fact, many of the topics covered in undergraduate math programs come from real world problems that someone had to solve. When students are posed with the original issue, or a current day issue that uses the mathematics, their interest and willingness to work increase dramatically.

The same thing goes for me in another discipline. For example, take history. Knowing what happened at the battle of Verdun in WWI is not all that exciting. However, if you look at the situation from both sides just prior to the battle, you start to see why it is important and why the leaders made the decisions that they did.

Math educators need to focus on learning for the long term, not short term 'can you remember this'. Students should accept nothing less.

- The second and third sentences of your answer are things that should be taught also to mathematics students! But then you have to think how to assess this knowledge and expertise so that students take it seriously. At Bangor we developed a course in "Mathematics in context"; there is an old debating club tag: "Text without context is merely pretext." – Ronnie Brown Apr 6 2012 at 21:08

The Euler characteristic (Steven Gubkin above) is potentially a good option - it feeds into, for example, how to make a football, and why there are pentagons in the domes at the Eden Project: but that is more than 10 minutes.

Another option is talking about the number line - showing you can cover the rationals with intervals of arbitrarily small total length: related to the infinities question above. But I sometimes use this to show that the number line (which is used from primary school upwards) requires some careful thinking - and if I have time I explain how this feeds into the mathematics of continuity and change (a mention of Zeno's paradoxes gets in too).
If you want a 'why do maths' question, such stories as the making of nuclear bombs/reactors (making sure that the bombs explode, but the reactors don't), or sending men to the moon and getting them back (not just how much fuel, but how long does it take, therefore how much food, etc.) could provide a narrative and a motivation. But if you are looking for material in relation to schools, the Mathematical Association has a good list of resources, which might also give some ideas.

- 5 Yeah, the production of nuclear bombs is a big motivation to do mathematics .... – Martin Brandenburg Nov 25 2010 at 12:30

Do half-sphere shaped domes need pentagons? – Zsbán Ambrus Jan 31 2011 at 21:17

Since you deal with higher ordinals, 10 minutes seems like more than enough time to prove that there are more real numbers than natural numbers. I think most people can follow the "list of decimals" proof. After you do this, your audience is ready to believe that there are different infinities. Now, explain that YOU are interested in the problem of characterizing these infinities. You want to show them that there are limitations on proving the existence of higher ordinals. Of course, you cannot do this. But, for example, you could use Russell's paradox to show that there are subtleties that one must take into account. Good luck!

- You might like to look at the article "Making a mathematical exhibition" http://pages.bangor.ac.uk/~mas010/icmi89.html With regard to content we agreed that the aims were to:

1. suggest that the making of mathematics is a natural human activity, part and parcel of the usual methods by which man has explored, discovered, and understood the world;
2. present each item with a purpose and context, and not just because it was something that could be shown or demonstrated;
3. convey an impression of some of the key methods by which mathematics works;
4. show mathematics in the context of history, art, technology and other applications.

(In the end, item 4 was much too ambitious! But it led to a collaboration with John Robinson in presenting his Symbolic Sculptures.) The subject of knots is suitable for conveying things about mathematics. You can see the exhibition "Mathematics and knots" at http://www.popmath.org.uk In this area we could present the methods of:

1. Representation
2. Classification
3. Invariants
4. Analogy
5. Decomposition into simple elements (and I would also include laws of combination)
6. Applications

See also articles on my web page on "Popularisation and Teaching" www.bangor.ac.uk/r.brown/publar.html This experience of popularisation, and presentations to 13-year-olds in Masterclasses, proved very useful when I was invited to give talks to a wide range of scientists, who are very interested in what conceptual advances are being made in mathematics (rather than solutions to "million dollar problems"). See arXiv:math/0306223, a talk to a conference on theoretical neuroscience.

- If I had to explain large cardinals in 10 minutes my first source of inspiration would be Kanamori's excellent writings about their (whiggish) history. People like history. I wouldn't spend any time trying to justify or prove any result. Maybe 2 minutes to explain the cumulative hierarchy so that everybody understands the typical picture of V. If people guess there is a long coherent tradition of research starting from Cantor, hence of interest in the field, I think that's enough. They really don't need to understand anything, just have a general feeling of a sort of flow of ideas.
- I attended a fantastic lecture by Catherine Roberts of Holy Cross, which was accessible to me as a lowly calculus student. She explained the process of making a mathematical model of rafting trips down the Colorado River without using any "scary" equations, and even without them she demonstrated how useful math is. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9650436639785767, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/21136-airline-probability.html
# Thread: 1. ## airline probability

Airline passengers arrive randomly and independently at the passenger screening facility at an airport. The mean arrival rate is 10 passengers per minute. What is the probability of no arrivals in a 15-second period?

2. Originally Posted by dclift
Airline passengers arrive randomly and independently at the passenger screening facility at an airport. The mean arrival rate is 10 passengers per minute. What is the probability of no arrivals in a 15-second period?

The number of arrivals in a time interval of length $\tau$ has a Poisson distribution with expected number $\tau \times \rho$, where $\rho$ is the expected number per unit time. Here the expected number per unit time is $1/6$ of an arrival per second, so the expected number in $15$ seconds is $5/2$. Hence the probability of $k$ arrivals in a $15$-second interval is:

$f(k,2.5) = \frac{2.5^k e^{-2.5}}{k!}$

so if $k=0$, this is:

$f(0,2.5) = \frac{2.5^0 e^{-2.5}}{0!}=e^{-2.5}\approx 0.082$

RonL
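As a quick numerical sanity check of the answer above (an added sketch, not part of the original thread), the whole computation is a few lines of Python; `poisson_pmf` is just the standard Poisson probability mass function:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k arrivals when the expected count is lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

rate_per_second = 10 / 60      # 10 passengers per minute = 1/6 per second
lam = rate_per_second * 15     # expected arrivals in a 15-second window = 2.5

print(poisson_pmf(0, lam))     # ~0.0821, matching e^{-2.5}
```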
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.891211986541748, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=497780
Physics Forums Recognitions: Gold Member Science Advisor

## Why using quantum group SUq(2) instead of SU(2) makes sense?

Increasingly QG is being done with a q-deformed symmetry group: replacing SU(2) by the quantum group SUq(2). What do you see as the intuitive basis for this? One intuitive justification is that in a universe with a minimum measurable size and a maximum distance to the horizon there is an inherent uncertainty in determining angle. A minimal angular resolution, which SUq(2) nicely implements. The argument is simple and is worked out in this 2-page paper. http://arxiv.org/abs/1105.1898

A note on the geometrical interpretation of quantum groups and non-commutative spaces in gravity
Eugenio Bianchi, Carlo Rovelli (Submitted on 10 May 2011)
Quantum groups and non-commutative spaces have been repeatedly utilized in approaches to quantum gravity. They provide a mathematically elegant cut-off, often interpreted as related to the Planck-scale quantum uncertainty in position. We consider here a different geometrical interpretation of this cut-off, where the relevant non-commutative space is the space of directions around any spacetime point. The limitations in angular resolution express the finiteness of the angular size of a Planck-scale minimal surface at a maximum distance $1/\sqrt{\Lambda}$ related to the cosmological constant $\Lambda$. This yields a simple geometrical interpretation for the relation between the quantum deformation parameter $$q=e^{i \Lambda l_{Planck}^2}$$ and the cosmological constant, and resolves a difficulty of more conventional interpretations of the physical geometry described by quantum groups or fuzzy spaces.
Comments: 2 pages, 1 figure

The cosmo constant is a reciprocal area: a handle on it is the distance $1/\sqrt{\Lambda}$, which is easy to calculate by typing this into the google window: c/(70.4 km/s per Mpc)/sqrt 3/sqrt 0.728

Recognitions: Science Advisor

A couple of reasons before this:
1) The Ponzano-Regge spin foam model of 3D gravity was first regularized by going to quantum groups in the Turaev-Viro state sum model. The TV model seems to have positive cc compared to the Ponzano-Regge model.
2) As Physics Monkey pointed out in http://www.physicsforums.com/showpos...72&postcount=7, the Levin-Wen models (which are somehow related to the TV model) use quantum groups. Spin foams have a long history of thinking of gravity as being a constrained BF theory, and so related to TQFTs, which the TV and Levin-Wen models are. Rovelli still believes his current model is some sort of TQFT http://arxiv.org/abs/1010.1939 . Hellmann's http://arxiv.org/abs/1105.1334 has very interesting comments on p8 about how triangulation independence, which is characteristic of state sum TQFTs, is supposed to be achieved.

Recognitions: Science Advisor

I am still not convinced that this algebraic setup for the cc (via the deformation parameter q) makes sense b/c we know from the asymptotic safety program that the cc is subject to renormalization. So there are two competing ideas how to introduce the cc
- as something very special via the q-deformation
- as an ordinary coupling constant
I still do not see how these two approaches can be related.

Recognitions: Science Advisor

## Why using quantum group SUq(2) instead of SU(2) makes sense?
Quote by tom.stoer
I am still not convinced that this algebraic setup for the cc (via the deformation parameter q) makes sense b/c we know from the asymptotic safety program that the cc is subject to renormalization. So there are two competing ideas how to introduce the cc - as something very special via the q-deformation - as an ordinary coupling constant I still do not see how these two approaches can be related.

But if, as you have said, LQG should get unification, then the current interpretations of the formalism as geometry should be wrong anyway. In his version of dS/CFT, David Lowe used the quantum deformation of the isometry group of de Sitter space, because he wanted a finite-dimensional Hilbert space, because the cosmological horizon of an observer in de Sitter space has a finite entropy.

Recognitions: Science Advisor

Quote by atyy
But if, as you have said, LQG should get unification, then the current interpretations of the formalism as geometry should be wrong anyway.

No, not really. The current formalism is not geometry (neither today nor in future), but geometry emerges in a certain semiclassical limit (already today). Using SUq(2) could be a way to let several aspects emerge simultaneously - geometry, particles, interactions, ... This is a fascinating idea and I am really a fan - but I don't see how this can be harmonized with renormalization.

Recognitions: Gold Member Science Advisor

Quote by tom.stoer
I am still not convinced that this algebraic setup for the cc (via the deformation parameter q) makes sense b/c we know from the asymptotic safety program that the cc is subject to renormalization. So there are two competing ideas how to introduce the cc - as something very special via the q-deformation - as an ordinary coupling constant I still do not see how these two approaches can be related.

I don't either, but perhaps it is possible that they might be related. This paper gives some suggestions of how that might work, on an intuitive level. http://arxiv.org/abs/1105.1898 It interprets the deformation parameter as something that could run. The group deformation q is a function of the finest angular resolution φ = Lmin/Lmax, the ratio of the two distance scales. In the limit as φ --> 0, the deformation q --> 1 and the quantum group SUq(2) goes to the ordinary SU(2). One might say the idea of the finest angular resolution is the more intuitively meaningful of the two. Or else that the difference between them is trivial, since they are so closely related: q = exp(i φ^2). I suppose it is obvious that φ can change as the universe evolves. It can change even without Lmin changing, if Lmax does. With any definition I can think of, the latter does change over time. There is a distance called the cosmological event horizon CEH which is at present slowly increasing towards a limit currently estimated to be about 16 billion lightyears. Roughly speaking, the asymptotic value of the CEH is 1/√Λ, a distance you see being used in the paper. But more precisely, using standard model cosmology, the limit is √(3/Λ). There is a rogue factor of √3 which gets in there. So even if √Λ would remain constant, the Lmax would change over time. That is, if we take it to be the CEH, which is the bound on how close a galaxy must be if it can, today, send us a message which will eventually reach us. If a galaxy is closer than the cosmological event horizon then whatever events happen there today, we will eventually get the news. If it is farther than that, we will never get the news.
Most people here will know the CEH, but I put it in for completeness in case some do not. Even with Λ constant, that CEH distance is slowly increasing towards an estimated 16 billion lightyears. And the CEH used to be much smaller. At the time the microwave background light was emitted it was several orders of magnitude smaller. I would guess something like 40-some million lightyears, instead of the present 15 billion or so. I don't pretend to understand in what sense the cosmological constant Λ itself could run. None of this makes clear sense to me yet. But it seems to have possibilities. Regardless of what happens with Λ itself, there is clearly some play in both the Planck length (related to Lmin) and in the CEH (related to Lmax). So intuitively there must be some play in their ratio, the finest angular resolution φ. The message of this short paper, in my view, is that we can think of the quantum group deformation simply as a measure of the finest angular resolution. (Perhaps it even has something to do with the "relative locality" line of investigation currently being pursued. This puts emphasis on the individual observer's momentum space and therefore necessarily on determining the incoming angles of whatever the observer is detecting/measuring.) If anyone is curious, the asymptotic value of the CEH can be gotten by putting this in google: c/(70.4 km/s per Mpc)/sqrt 0.728 in light years The numbers 70.4 and 0.728 are from the 7 year WMAP report.

Recognitions: Gold Member Science Advisor

Quote by tom.stoer
No, not really. The current formalism is not geometry (neither today nor in future), but geometry emerges in a certain semiclassical limit (already today). Using SUq(2) could be a way to let several aspects emerge simultaneously - geometry, particles, interactions, ... This is a fascinating idea and I am really a fan - but I don't see how this can be harmonized with renormalization.

As you mentioned, we think from asymptotic safety that Lambda runs, so how does this fit with SUq(2)? How do the two approaches relate? I don't think there is any mismatch. If indeed Lambda runs then that must affect the maximum length scale Lmax. Then certainly the angle resolution bound must run and therefore also the deformation parameter q! If we assume Lambda runs then so must q, because Lambda places limits on measurement (the available spherical harmonics etc, as explained in the paper), so there is no incompatibility. Or so I think.

Recognitions: Science Advisor

Quote by marcus
If indeed Lambda runs then that must affect the maximum length scale Lmax. Then certainly the angle resolution bound must run and therefore also the deformation parameter q! If we assume Lambda runs then so must q, because Lambda places limits on measurement ... so there is no incompatibility.

The inability to show the presence of an incompatibility does not automatically prove its absence.

Blog Entries: 30

http://arxiv.org/abs/1105.1898 “This geometrical picture resolves also a certain difficulty in interpreting the non-commutativity implied by a cut-off in the spins as related to the Planck scale fuzziness of physical space: the deformation parameter is dimensionless, and by itself does not determine a scale at which physical space becomes fuzzy.” Okayyyy! So, now, who can argue ... why this scale cannot be at 10^-18m?

Blog Entries: 5

It seems strange to me to have the quantity $$l_{pl}^2 \Lambda = G \Lambda$$ turn up in an equation.
Since in the action we always have the cosmological constant in the form $$\frac{\Lambda}{G}$$ This is the energy density that must couple to space-time in order to form a dimensionless quantity. Or just think of the Einstein equations $$G_{\mu \nu} + \Lambda g_{\mu \nu} = 8 \pi G T_{\mu \nu}$$ One would have to multiply through by G to get $$G \Lambda$$ to turn up, but this is arbitrary to some extent. It seems to imply that the Planck length has been put in by hand at some stage in the construction. Maybe someone knows at what stage and why we get this combination of the Planck length and the cosmological constant?

Recognitions: Gold Member Science Advisor

Quote by tom.stoer
The inability to show the presence of an incompatibility does not automatically prove its absence.

That's right! I spoke carelessly there. I should have said, not that there is no incompatibility but that there was no incompatibility shown. AFAIK it has not been proven that there is a natural compatibility between letting the group be deformed a variable amount into SUq(2), on the one hand, and treating Lambda as a running coupling constant, on the other hand. I have no idea how or if that would work out.

Recognitions: Gold Member Science Advisor

Quote by Finbar
... Maybe someone knows at what stage and why we get this combination of the Planck length and the cosmological constant?

I think it is the natural way to get a dimensionless quantity involving Lambda. Lambda, as I understand it, is always a reciprocal area. So to get a dimensionless quantity one must multiply it by an area. So one multiplies by the Planck area. (What else?) Just kidding, but you can see where it comes in already in equation (2)

Blog Entries: 5

Quote by marcus
I think it is the natural way to get a dimensionless quantity involving Lambda. Lambda, as I understand it, is always a reciprocal area. So to get a dimensionless quantity one must multiply it by an area. So one multiplies by the Planck area. (What else?)

So basically by choosing to measure all quantities in Planck units. Hmm, thinking about it, this should come out somehow when one computes the entropy of the cosmological horizon, since the area of the cosmological horizon is the reciprocal of Lambda.

Recognitions: Science Advisor

Quote by tom.stoer
No, not really. The current formalism is not geometry (neither today nor in future), but geometry emerges in a certain semiclassical limit (already today). Using SUq(2) could be a way to let several aspects emerge simultaneously - geometry, particles, interactions, ... This is a fascinating idea and I am really a fan - but I don't see how this can be harmonized with renormalization.

I don't understand how the EPRL/FK semiclassical limit is going to work in the q-deformed case. Is there still a J->infinity limit, or does J have a maximum value now? Or if q has to be taken to zero for the limit, then the cosmological constant disappears in the semiclassical limit? Actually that sounds right, given your idea that the cc should arise naturally from some sort of renormalization flow.

Recognitions: Science Advisor

Carfora, Marzuoli and Rasetti's review of quantum tetrahedra http://arxiv.org/abs/1001.4402 begins with a great comment inspired by "Nescit Labi Virtus (Virtue cannot fail)".

Recognitions: Science Advisor

Quote by atyy
I don't understand how the EPRL/FK semiclassical limit is going to work in the q-deformed case. Is there still a J->infinity limit, or does J have a maximum value now?
Or if q has to be taken to zero for the limit, then the cosmological constant disappears in the semiclassical limit? Actually that sounds right, given your idea that the cc should arise naturally from some sort of renormalization flow.

The semiclassical limit is an interesting aspect; honestly, I can't see how a q = zero or a j = infinity limit could make sense. It's not my idea that the cc could arise from renormalization. It's just an idea that the non-Gaussian fixed point in AS gives us a non-zero cc, but I do not see how to get a running cc (which means a running q!) for the q-deformation.
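A numerical footnote (an added sketch, not part of the thread): the two google-calculator recipes quoted above are easy to reproduce in a few lines of Python. The only inputs are the WMAP-7 values H0 = 70.4 km/s per Mpc and ΩΛ = 0.728 quoted in the posts; the Planck length is included only to show the size of the deformation exponent Λ·l_P² discussed in the Bianchi-Rovelli paper.

```python
import math

# Inputs quoted in the thread (7-year WMAP values).
H0_km_s_Mpc = 70.4       # Hubble rate
Omega_L = 0.728          # dark-energy fraction

# Standard constants and unit conversions.
Mpc_m = 3.0857e22        # metres per megaparsec
ly_m = 9.4607e15         # metres per light year
c = 2.9979e8             # m/s
l_planck = 1.616e-35     # m

H0 = H0_km_s_Mpc * 1e3 / Mpc_m           # Hubble rate in 1/s
hubble_dist = c / H0                     # metres

# Lambda = 3 * Omega_L * (H0/c)^2, so:
inv_sqrt_Lambda = hubble_dist / math.sqrt(3 * Omega_L)  # "c/H0/sqrt3/sqrt0.728"
ceh_limit = hubble_dist / math.sqrt(Omega_L)            # "c/H0/sqrt0.728"

print(f"1/sqrt(Lambda) ~ {inv_sqrt_Lambda / ly_m / 1e9:.1f} billion ly")  # ~9.4
print(f"CEH asymptote  ~ {ceh_limit / ly_m / 1e9:.1f} billion ly")        # ~16.3

# The deformation angle phi^2 = Lambda * l_Planck^2 discussed above:
Lambda = 3 * Omega_L * (H0 / c) ** 2
print(f"Lambda * l_P^2 ~ {Lambda * l_planck ** 2:.1e}")                   # ~3e-122
```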
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238139986991882, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2009/08/01/test-your-intuition-8/?like=1&source=post_flair&_wpnonce=c1ce40f6e1
Gil Kalai’s blog

## Test Your Intuition (8)

Posted on August 1, 2009 by Gil Kalai

Consider all planar sets A with constant width 1. Namely, in every direction, the distance between the two parallel lines that touch A from both sides is 1. We already know that there exist such sets other than the circle of radius 1/2. Among all these sets, which one has the largest perimeter? The smallest perimeter?

This entry was posted in Convexity, Test your intuition and tagged Test your intuition.

### 14 Responses to Test Your Intuition (8)

1. D. Eppstein says: The length of a curve is proportional to the expected number of its crossings with a uniformly random line. Each constant-width curve has the same expected number of crossings — for any slope of the random line, it meets the lines with that slope in an interval of the same length — so they all have the same perimeter.

2. HighSchool says: Strictly speaking, Eppstein’s argument is false, given that Gil asked about planar *sets*. Take, for instance, an annulus of outer radius 1 and inner radius 1/2. The perimeter is certainly larger than for the disk of radius 1.

3. Gil Kalai says: David’s answer is right and beautiful but it deserves to be elaborated a little more.

4. mpetrache says: The maximum perimeter is infinity if you don’t ask the figure to be convex. I guess for convex figures the infimum is attained and independent of the figure, like in David Eppstein’s guess. To prove that I would “rotate the figure inside the 2 surfaces” and write some differential equation to control that, like in the proof of Kepler’s law.

5. D. Eppstein says: Ok, fine, here’s a somewhat more detailed explanation. One may define a measure on the space of lines in the plane that is invariant under rotation and translation; one way to do this is to parametrize a line by the pair (ρ,θ) where θ is the angle of the line and ρ is its (signed) distance from the origin, and use the uniform measure of points on a plane with Cartesian (not polar!) coordinates (ρ,θ). I think I first learned about this measure in a 1986 message to sci.math by William Thurston, but of course it goes back much farther than that and is, um, well-known among the people who know this sort of thing. For convenience, let’s call the plane where the lines live “primal” and the plane having (ρ,θ) as Cartesian coordinates “dual”. This measure is obviously rotation invariant (a rotation of the primal plane is just a vertical translation in the dual plane). It’s also translation invariant: a translation of the primal plane causes the ρ coordinates of all lines with angle θ to shift by the same amount, so any set in the dual plane is transformed into another set with the same measure in each horizontal slice, and by Cavalieri’s principle the whole set has the same measure. For a unit-length line segment pq, let L(pq) denote the set of lines that intersect pq. It follows from translation and rotation invariance that the measure of L(pq) is a constant independent of how pq is placed in the plane, and we may as well normalize the measure so that this constant is 1. It then follows by a limiting argument that, for any curve C, the multiset L(C) of lines that intersect C (with multiplicity equal to the number of intersection points) has measure equal to the length of C.
If C is the boundary of a constant-width curve with unit width, then in the dual (ρ,θ) plane the dual points corresponding to lines in L(C) form a set whose horizontal slices form length-1 intervals, the same as for a unit circle. The endpoints of each of these horizontal slices have multiplicity 1 in L(C) and the interior points have multiplicity 2, the same as for a circle again. By Cavalieri’s principle again, L(C) has the same measure as L(circle), so C has the same length as the circle. In my earlier comment I used probability rather than measure: to define a uniform probability on the set of lines one has to restrict them to a subset of bounded measure, say the lines intersecting a disk big enough to contain all constant-width curves. A nice way to generate uniformly random lines that cross this disk is to pick two uniformly random points on its boundary circle and form the line connecting them. GK: Many thanks David!

6. Lior says: Here’s the variant of the proof that occurred to me: fix a “basepoint” inside the shape and parameterize the boundary in polar co-ordinates: say for each angle $\theta \in [0,2\pi]$ the boundary is at distance $r(\theta)$. Assuming the boundary is piecewise reasonably smooth, the length of the boundary is given by $\int_0^{2\pi} r(\theta)\,d\theta$. Finally, use $r(\theta) + r(\theta+\pi) = D$ (the fixed diameter) to see that the integral is $\pi D$.

7. Lior says: By the way, doing the same calculation in $R^n$ (integrating $r(\theta)^{n-1}$ over the unit sphere) seems to indicate that in general the surface area of bodies of constant width would not be constant.

8. Pingback: Buffon’s Needle and the Perimeter of Planar Sets of Constant Width « Combinatorics and more

9. Anonymous Rex says: Lior, $\int_0^{2\pi} r(\theta)\,d\theta$ does not compute the length of the boundary. See http://en.wikipedia.org/wiki/Arclength#Finding_arc_lengths_by_integrating for a correct formula.

10. Omar says: Another proof: (A-A)/2 is a centrally symmetric figure with constant width 1 and with the same perimeter as A. The only centrally symmetric figure with constant width is the circle, so all sets A have the same perimeter.

11. Gil Kalai says: Dear Omar, what I see at once is that (A-A)/2 is centrally symmetric. I do not see the other two properties…

12. Omar says: Dear Gil, 1) The width of the Minkowski sum of two planar convex figures in a certain direction is the sum of the widths of the two figures in that direction. In fact, the extreme point(s) of the sum in a certain direction is (are) the sum of the extreme point(s) of the two figures in that direction. 2) The perimeter of the Minkowski sum of two planar figures is the sum of the perimeters of the two figures. This is easy for polygons and follows by approximation for any convex planar figure. For two convex polygons with m and n sides, respectively, the sum has m+n sides where each of the lengths of the sides of the two polygons occurs exactly once. Actually, some of the m+n sides may collapse to one single side in singular cases, but then the lengths will add up.

13. Gil Kalai says: Dear Omar, this is a very nice proof. Thanks!

14. Pingback: Test Your Intuition (10): How Does “Random Noise” Look Like. « Combinatorics and more
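A quick numerical illustration of the result (an added sketch, not part of the comment thread): approximating the boundary of a Reuleaux triangle of width 1 by a fine polygon shows its perimeter agreeing with the circle's, namely π.

```python
import math

# Reuleaux triangle of width 1: three arcs of radius 1, each centred at a
# vertex of an equilateral triangle of side 1 and spanning the other two.
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def arc_points(center, start_angle, n=20000):
    """Points along a radius-1 arc of angular width pi/3."""
    cx, cy = center
    step = (math.pi / 3) / n
    return [(cx + math.cos(start_angle + step * i),
             cy + math.sin(start_angle + step * i)) for i in range(n + 1)]

# Arc centred at verts[0] runs from verts[1] to verts[2] (directions 0..60°),
# and similarly for the other two vertices; together they close the curve.
pts = (arc_points(verts[0], 0.0)
       + arc_points(verts[1], 2 * math.pi / 3)
       + arc_points(verts[2], 4 * math.pi / 3))

perimeter = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
print(perimeter, math.pi)   # both ~3.14159...
```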
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9112908840179443, "perplexity_flag": "head"}
http://mathoverflow.net/questions/62340/propositions-equivalent-to-the-completeness-of-the-real-numbers
## Propositions equivalent to the completeness of the real numbers

Can anyone point me to a reasonably comprehensive article (or book chapter) explaining which basic theorems of calculus are equivalent to the completeness axiom of the reals and which ones aren't? Here "equivalent" means equivalent relative to a base system that includes all the ordered field axioms, plus naïve set theory, plus (optionally) the Peano axioms (which one probably needs if one wants to use the natural numbers as an index-set, e.g. in the Nested Intervals Property). At first I thought reverse mathematics would be the place to look, but a little bit of poking around now leads me to think that reverse mathematics in the usual sense deals with more arcane issues, with base systems that are at once weaker and stronger than what I have in mind: König's infinity lemma isn't provable in all of them, but the Intermediate Value Theorem is. (Stephen Simpson, in his Wikipedia article http://en.m.wikipedia.org/wiki/Reverse_mathematics, writes: "... RCA0 is sufficient to prove a number of classical theorems which, therefore, require only minimal logical strength. These theorems are, in a sense, below the reach of the reverse mathematics enterprise because they are already provable in the base system. The classical theorems provable in RCA0 include: ... Basic properties of the real numbers (the real numbers are an Archimedean ordered field; any nested sequence of closed intervals whose lengths tend to zero has a single point in its intersection; the real numbers are not countable). ... The intermediate value theorem on continuous real functions.".) So, reverse mathematics may not be the place to turn for answers to questions like "Is the completeness of the reals equivalent to the Mean Value Theorem?" (answer: yes); but I'm sure someone has considered such questions systematically. Perhaps somebody wrote a beautiful Monthly article a few decades ago that explained things so clearly as to make the whole matter seem trivial, with the result that the article was forgotten? :-)

- My omission of the Archimedean axiom from the base theory was intentional. Indeed, one way to show that various theorems of calculus do NOT imply completeness is to show that they are satisfied by non-Archimedean totally ordered fields. – James Propp Apr 19 2011 at 21:29

This reference might help: math.hawaii.edu/~tom/mathfiles/rolle_Illinois.pdf Of course you can also look for more recent articles that cite it. – David Feldman Apr 20 2011 at 0:37

T. W. Körner, A Companion to Analysis discusses some of this. – lhf Apr 20 2011 at 10:34

1 This question is difficult to answer because it is too vague. First, there are two different axioms of completeness for the reals: Cauchy completeness and Dedekind completeness. I suspect you mean the latter, but this is not clear from the question. Second, the base theory you describe is unclear. What kind of comprehension axioms does your "naïve set-theory" comprise? Are the (optional!) natural numbers distinguished elements of the field or are they a separate sort? How are functions coded in this theory? – François G. Dorais♦ Jul 15 2011 at 15:19

1 François is correct in surmising that when I wrote "completeness" I meant "Dedekind completeness".
As for what base theory I am presupposing, François answered this question very convincingly in the thread mathoverflow.net/questions/71344/… : it's second-order logic with standard semantics. – James Propp Jul 27 2011 at 14:37

## 2 Answers

Since the article I was looking for doesn't seem to exist, I decided to write one myself; the current draft can be found at http://jamespropp.org/reverse.pdf . Comments are welcome!

- 1 Very interesting! What would be wonderful is a chart with arrows giving the implications between the various properties. – Qiaochu Yuan Jul 26 2011 at 19:30

I am having trouble understanding the exact theory that you are working in. It seems to me like a modification of the theory RCF (the theory of real closed fields, which is complete for the first-order properties of real numbers in the language of ordered fields); it would be nice to state the exact language and axioms of the theory in one place. – Kaveh Jul 26 2011 at 20:08

You may also want to check the first volume of "Constructivism in Mathematics" by van Dalen and Troelstra where they develop a theory for elementary analysis and discuss the equivalence/non-equivalence of various analytical principles and definitions (constructively). – Kaveh Jul 26 2011 at 20:11

I'll look into van Dalen Troelstra; thanks for the reference. Meanwhile, can you give me more information about the completeness of the theory RCF? This might resolve the issue about provability that I didn't know how to address at the bottom of page 2 and the top of page 3. – James Propp Jul 26 2011 at 21:59

The language of RCF is the language of ordered rings. The axioms are the axioms for an ordered field plus axioms stating that every polynomial of odd degree has a root (IIRC). The theory is complete in the sense that for any first-order sentence in this language, either RCF proves it or RCF proves its negation. You can find more about RCF in model theory books like David Marker's. – Kaveh Jul 27 2011 at 0:22

EDIT: I noticed that I misunderstood the question after posting the answer, so this is not an answer to the question. I am leaving it here just in case it might be interesting to others.

This is studied in bounded reverse math by people like Fernando Ferreira and colleagues. The base theory BTFA [Fer'94] is a two-sorted version of Sam Buss's bounded arithmetic theory $S^1_2(\alpha)$ [Bus85, ch. 9] plus bounded collection/replacement for $\Sigma^b_\infty$ formulas ($B\Sigma^b_\infty$) plus a form of comprehension axiom for $\Delta_1$ sets ($\nabla^b_1CA$): $$\forall x (\forall z \ \varphi(x,z) \leftrightarrow \exists y \ \psi(x,y)) \Rightarrow \exists Z \ \forall x \ (x \in Z \leftrightarrow \exists y \ \psi(x,y))$$ where $\varphi$ and $\psi$ are respectively $\Pi^b_1$ and $\Sigma^b_1$ formulas. This is a modification of Simpson's axiom in his book [Sim'09]. Because of its special form the first-order part is conservative over $S^1_2$ and is incapable of using the full power of comprehension for $\Delta_1$ sets. On the other hand, the second-order part of the smallest model of the theory consists of the $\Delta_1$ sets. In [FF'02, thm. 4], a version of the Intermediate Value Theorem is proven in BTFA. Some caution is needed here in formalizing the IVT.
Also the proof is not constructive: either there is a rational number which is a root of the function, or we can continue a process getting arbitrarily close to a root. Whether a given rational number is a root of the function is not decidable, and deciding this is required since we need to stop the process of dividing the current interval into two halves if we reach a root; i.e., we need this assumption so that we have $f(m)<0 \ \lor \ f(m)>0$, where $m$ is the rational midpoint of the current interval. As far as I remember WKL is not provable in BTFA. See also [FF'05] and [FF'08].

References:

1. Fernando Ferreira, "A feasible theory for analysis", The Journal of Symbolic Logic 59, 1001-1011, 1994.
2. António Fernandes and Fernando Ferreira, "Groundwork for weak analysis", The Journal of Symbolic Logic 67, pp. 557-578, 2002.
3. António Fernandes and Fernando Ferreira, "Basic applications of weak König's lemma in feasible analysis", in "Reverse Mathematics 2001", edited by Stephen Simpson. Lecture Notes in Logic (Association for Symbolic Logic), vol. 21, pp. 175-188 (A K Peters, 2005).
4. Fernando Ferreira and Gilda Ferreira, "The Riemann integral in weak systems of analysis", Journal of Universal Computer Science, 14, no. 6, pp. 908-937 (2008).
5. Samuel R. Buss, "Bounded Arithmetic", Bibliopolis, Revision of 1985 Ph.D. thesis.
6. Stephen G. Simpson, "Subsystems of Second Order Arithmetic", Second Edition, Perspectives in Logic, Association for Symbolic Logic, Cambridge University Press, 2009.

- This is a good answer, Kaveh. You should undelete it. There is also interesting work of Phuong Nguyen and Stephen Cook in this area that might be useful. – François G. Dorais♦ Jul 16 2011 at 6:40

I deleted my answer since these theories still satisfy "base systems that are at once weaker and stronger than what I have in mind: König's infinity lemma isn't provable in all of them, but the Intermediate Value Theorem is." – Kaveh Jul 27 2011 at 11:17
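To make the non-constructive step concrete, here is a small Python caricature of the bisection process the answer describes (an added sketch; the function f and the step count are illustrative, not from the paper). The marked case split is exactly the trichotomy $f(m)<0 \lor f(m)>0$ that is not decidable for general computable reals:

```python
from fractions import Fraction

def bisect_root(f, lo: Fraction, hi: Fraction, steps: int) -> Fraction:
    """Halve [lo, hi] repeatedly, assuming f(lo) < 0 < f(hi)."""
    for _ in range(steps):
        m = (lo + hi) / 2
        v = f(m)
        # The classical case split below is the non-constructive step:
        # for a general real-valued f, "v == 0 or not" cannot be decided.
        if v == 0:
            return m          # exact rational root found: stop
        elif v < 0:
            lo = m
        else:
            hi = m
    return (lo + hi) / 2

# Illustrative example: f(x) = x^2 - 2 on [1, 2]; the root sqrt(2) is
# irrational, so the v == 0 branch never fires and we only approximate.
approx = bisect_root(lambda x: x * x - 2, Fraction(1), Fraction(2), 30)
print(float(approx))   # ~1.41421356...
```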
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9189371466636658, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/5449/combinatorial-results-without-known-combinatorial-proofs/5451
## Combinatorial results without known combinatorial proofs

Stanley likes to keep a list of combinatorial results for which there is no known combinatorial proof. For example, until recently I believe the explicit enumeration of the de Bruijn sequences fell into this category (but now see arXiv:0910.3442v1). Many unimodality results also fall into this category. Do you know of any other results of this kind, especially results that look frustratingly like they ought to have simple combinatorial proofs? For the purposes of this question, "combinatorial result" should be interpreted as meaning some kind of exact enumeration, and "combinatorial proof" should be interpreted as meaning, more or less, "bijective proof." (So for example I am not interested in bounds on Ramsey numbers.)

- 1 Does enumerating Young tableaux of a given format count? Knuth volume 3 states that there is no simple proof for that result. – Zsbán Ambrus Oct 9 2010 at 10:38

@Zsbán: If you mean the hook-length formula, then there is now a proof by Novelli, Pak, and Stoyanovskii that is as good a bijective proof as one can hope for. – Timothy Chow Jan 7 2011 at 18:47

## 12 Answers

This is a very common theme in enumerative combinatorics. You can find a lot of examples with the Google search "no bijective proof" (with quotes). First, I can say something about why you might care about bijective proofs. Combinatorial species are certainly a nice theory, but they are a fairly specific and elaborate answer related to generating functions. A more general reason is that a bijective proof categorifies an equality in combinatorics to the category of sets. In other words, it promotes an equality $|A| = |B|$ to an isomorphism $A \cong B$. In my opinion, it is just as important to find any other categorification, for instance to the category of vector spaces. Instead of showing that two sets are the same size with a bijection, you would show that they are the same size using an invertible matrix. Second, of the many examples, I can name one that I encountered. This example is interesting because the objects in question seem very similar. Recall that an alternating sign matrix is a matrix whose non-zero entries in each row and column alternate between $1$ and $-1$, and such that the first and last non-zero entry in each row and column is $1$. One interesting subclass is the ASMs of order $2n+1$ which are symmetric about a vertical line. Another interesting subclass is the ASMs of order $2n$ which are diagonally symmetric and have 0s on the diagonal. (ASMs of either type of the opposite parity do not exist.) The first class was discovered by David Robbins and I found the second class. I proved David's product formula for the first class and I established the same product formula for the second class. So these two classes of ASMs are equinumerous, but no bijective proof is known. Here is another interesting example in the same vein. A cyclically symmetric, self-complementary plane partition (CSSCPP) is equivalent to a tiling of a regular hexagon of order $2n$ by unit lozenges, which is invariant under 60 degree rotation. Here a unit lozenge is two unit equilateral triangles stuck together. A totally symmetric, self-complementary plane partition (TSSCPP) is the same thing except with full dihedral symmetry. (I make the size even because otherwise there aren't any plane partitions with the imposed symmetry.)
The formulas for both classes were also conjectured by David Robbins; George Andrews proved his conjecture for TSSCPPs and I proved the conjecture for CSSCPPs. In particular, the number of CSSCPPs of a fixed size is the square of the number of TSSCPPs, but no one knows a good bijection. The single most striking thing that David Robbins found was that the number of TSSCPPs, which are plane partitions with full symmetry, equals the number of ASMs with no imposed symmetry. No bijective proof of that is known either. On the positive side, Doron Zeilberger's proof of the ASM conjecture, and his later paper on refined ASMs, could be steps towards one because they equate certain generalizations and refined enumerations. However, alternating-sign matrices look totally different from plane partitions. In my opinion, the most frustrating case is when we can't even match like to like.

- Apropos of Greg's first point: there are such things as "linear species", which are functors from B = (finite sets + bijections) to Vect. Any linear species has a generating function, as long as it takes values in finite-dimensional vector spaces. So the theory of species allows you to categorify the theory of generating functions not just in the set-theoretic sense of categorification, but also in the linear sense. Also: I don't know why you use the word "elaborate": what could be simpler than the definition of species? – Tom Leinster Nov 15 2009 at 2:04

3 The theory of species is more elaborate than the definition of a species. For instance, the description of "basic" operations on species in Wikipedia could be called elaborate. I guess my real point is that species seem intended as a categorification of generating function methods, which is fine, but not all combinatorial objects fit well into that framework. – Greg Kuperberg Nov 15 2009 at 2:49

This isn't exactly an answer, but since this is community wiki I hope it's in the spirit of things if I add a twist to the question. One of the excellent things about the excellent theory of species is that it has at its heart a notion of natural bijective proof. Let me sketch the basic idea. A species is simply a functor from the category $$\mathcal{B} = (\mbox{finite sets } + \mbox{ bijections})$$ to the category of sets. One thinks of a species as a way of decorating a finite set with some extra combinatorial structure. For example, there is a species $L$ defined by $$L(X) = \{ \mbox{linear orders on } X\}$$ for finite sets $X$ (and defined in the obvious way on morphisms). Thus $L(X)$ is the set of ways of "decorating" $X$ with a linear order. Or, there is another species $P$ defined by $$P(X) = \{ \mbox{permutations on }X\}$$ for finite sets $X$ (and defined in the obvious way on morphisms). You can think of species as categorified generating functions. More exactly, for any species $S$ that is finite (takes values in finite sets), you can form its exponential generating function $\sum_n s_n x^n/n!$, where $s_n$ is the cardinality of $S(X)$ for any $n$-element set $X$. By passing from a species to its generating function (decategorification), you lose some information. I'll give a non-trivial example of this in a moment. There's an obvious notion of isomorphism of species, namely, natural isomorphism of functors. Are the species $L$ and $P$ above isomorphic?
We have $L(X) \cong P(X)$ for all $X$, since an $n$-element set admits both $n!$ linear orders and $n!$ permutations. But you can show that there is no natural isomorphism $L \cong P$. So no, $L$ and $P$ are not isomorphic. The intuition is this: in order to match up permutations and orders, you'd have to choose an order to correspond to the identity permutation; but an abstract finite set carries no canonical linear order, so you'd have to make a random choice. Hence there's no canonical correspondence between them. In particular, this implies that species with the same generating function ($\sum_n n! x^n/n! = 1/(1-x)$, here) need not be isomorphic. So yes, passing to the generating function can lose information. Moral: one notion of "bijective proof" is "existence of an isomorphism of species". It's quite a demanding notion, as the permutation/order example shows. One might consider compiling a list of all the pairs of species that have the same generating function but are not isomorphic. This list could usefully be compared to Stanley's list.

- 5 On the other hand, I tend to think that most people would say there IS a "bijective proof" of the fact that the number of linear orders agrees with the number of permutations, though there's no canonical bijection. You might say that the former is a torsor for the latter; is there a good way to express this in species language? – JSE Nov 14 2009 at 3:07

7 I'm not used to talking about species, but the fact that L is a torsor for P should be restatable as saying that LxL is canonically isomorphic to LxP. – Alison Miller Nov 14 2009 at 6:47

3 A (canonical) combinatorial proof that |L(X)|=|P(X)|: consider sets already decorated with a linear order. Then new (unrelated) linear orders on these sets are in canonical 1-1 correspondence with permutations taking the old order to the new one, so |L(X)|*#decorations = |P(X)|*#decorations, and dividing both sides by the number of decorations gives the result. – David Eppstein Nov 14 2009 at 8:29

7 This is a decategorification of Alison's definition of torsor above! – JSE Nov 14 2009 at 19:55

3 JSE and Alison: one way to express the torsor property is that the species LxL and LxP are isomorphic (in a particular way). All you have to do to prove this is check that some naturality squares commute. David: indeed, but note that in order to "divide both sides by the number of decorations [of X by linear orders]", you need to know that there exists a linear order on X. This is true, but you can't canonically exhibit one. – Tom Leinster Nov 15 2009 at 2:53

Warning: this used to be a great example, but I'm afraid it no longer is. Let $H(n)$ be the number of horizontally-convex polyominoes in the plane, where "horizontally convex" means just what you think it means, and equivalence is just up to translations (so mirror images and rotations are considered distinct). Using a sequence of manipulations with two-variable generating functions and an amazing amount of cancellation, one finds that $H(n) = 5H(n-1) - 7H(n-2) + 4H(n-3)$. I learned this from Gil Kalai in 1991 (and the result is much older), and I'm quite sure there was no known combinatorial proof of this surprising result for a while. However, fairly recently Dean Hickerson found one. I'm sure Dean thought that this looks frustratingly like something that ought to have a combinatorial proof, and then he proceeded to resolve this frustration in the only possible way.
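As a small companion to the last answer (an added sketch, not from the thread): the recurrence is easy to run. The initial values below are from the standard enumeration of horizontally convex polyominoes (OEIS A001169); the recurrence then reproduces the known counts from n = 5 onward.

```python
def horizontally_convex(n_max):
    """Counts of horizontally convex polyominoes via the linear recurrence
    H(n) = 5*H(n-1) - 7*H(n-2) + 4*H(n-3), valid for n >= 5."""
    H = [0, 1, 2, 6, 19]          # H[0] unused; H[1..4] from direct enumeration
    for n in range(5, n_max + 1):
        H.append(5 * H[n - 1] - 7 * H[n - 2] + 4 * H[n - 3])
    return H[1:]

print(horizontally_convex(9))
# [1, 2, 6, 19, 61, 196, 629, 2017, 6466]
```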
- For example, according to Stanley the identity $n \cdot \text{pp}(n) = \sum_{i=1}^{n} \sigma_2(i) \text{pp}(n-i)$ has no known bijective proof, where $\text{pp}(n)$ denotes the number of plane partitions of $n$.

- IINM, there's also no known bijective proof of the fact that pp(n) is the number of partitions of n into 1 type of size 1, 2 types of size 2, etc. (Is that equivalent to the above recursion? Feeling too lazy to check right now.) Or equivalently that it's the number of functions on an n-set up to permutation of the underlying set. – Harry Altman Oct 8 2010 at 1:12

The number of (isomorphism classes of) self-complementary graphs on n vertices is the difference between the number of graphs on n vertices with an odd number of edges and the number with an even number of edges. This is relatively easy to prove with counting arguments, but I'd love to have a combinatorial proof of this...

- But surely "counting arguments" is combinatorial. Do you mean you want a bijective proof? – Emil Nov 16 2009 at 10:42

Yes, I wasn't really clear. The proof I know uses Polya theory, and so you just set up the equations and the numbers magically come out the same on both sides. So it is just algebraic manipulation, rather than anything else. Therefore I'd like something bijective, or at least a natural interpretation of this difference. – Gordon Royle Nov 16 2009 at 13:11

The Graham-Pollak Theorem states that the minimum number of complete bipartite graphs needed to disjointly cover the edge set of the complete graph on n vertices is n-1. The only known proofs all use linear algebra, and no purely counting proof is known, as far as I know. There is a chapter about this in Aigner and Ziegler's Proofs from the BOOK.

- Perhaps I haven't had enough coffee this morning. I certainly don't have access to Aigner and Ziegler's book. Anyway, where is the linear algebra in the following? Base case n=1: m(n)=0 follows by inspection as there are no edges. More generally, extract Ka,b from Ka+b, then m(a+b) is bounded above by (and must be equal for at least one pair a,b) 1 + m(a) + m(b), giving the additional result of equality using any pair (a,b). If the linear algebra is hiding in the addition, I don't see it. – Gerhard Paseman Mar 24 2012 at 17:47

I should insist on a and b being positive integers as above. Gerhard "I Forgot My Signature Too!" Paseman, 2012.03.24 – Gerhard Paseman Mar 24 2012 at 17:50

I believe the linear algebra part isn't showing that $m(n)≤n−1$, but showing that you can't do it with fewer. – Kevin P. Costello Mar 24 2012 at 20:30

Yes, but the cool part about the above sketch is that there is a minimum decomposition which is of the form suggested above, with minimum value 1 + m(a) + m(b), and strong induction carries the day. Again, where is the linear algebra? Or do I need to be more explicit about the lower bound? Gerhard "Ask Me About System Design" Paseman, 2012.03.24 – Gerhard Paseman Mar 25 2012 at 2:59

Ah. I get it now. I cannot prove yet that one of the pieces in a minimal decomposition must be of the form Ka,b when I start with the complete graph on a+b vertices. Although I might be able to prove that by an edge count, I do not see it yet. OK. Gerhard "Back To The Drawing Board" Paseman, 2012.03.24 – Gerhard Paseman Mar 25 2012 at 3:02

The following statement seems not to have a clear combinatorial proof (or at least it did not in 2003, when I heard of it): Denote by L(n) the set of all partitions of n into distinct parts with the smallest part being odd.
Let L_o(n), L_e(n) be the subsets of L(n) consisting of partitions into an odd and an even number of parts respectively. Then |L_o(n)|-|L_e(n)| is 0 if n is not a perfect square, and is (-1)^(n+1) if n is a perfect square.

- Could you please clarify this statement? To me, a "partition into even parts" would be a partition whose parts are even, but there are not many of those whose smallest part is odd. – Hugh Thomas Nov 14 2009 at 17:46

@Hugh: edited: what I meant was "partitions into odd/even number of parts". Sorry - today is my official misprint day. – Vladimir Dotsenko Nov 14 2009 at 18:36

@Vladimir: thanks. I wondered if that was what you meant, but couldn't get it to work out correctly for n=5. The partitions in L(5) into an even number of parts seem to be 41, 2111; while those with an odd number of parts seem to be 5, 311, 221, 11111. Can you explain? – Hugh Thomas Nov 15 2009 at 14:56

@Hugh - I hope it's finally correct - with partitions into distinct parts. For example, in the case of 5 what we have is 41 vs 5, in the case of 6 - 51 vs 321, in the case of 7 - 7 and 421 vs 61 and 43 etc. – Vladimir Dotsenko Nov 16 2009 at 15:26

PROBLEM Of splitting a necklace between two thieves: Two thieves want to share equally the stones of a necklace (an open circle). The necklace has $s$ types of stones (each type of stone appears an even number of times). They want to minimize the number of cuts (the links are costly and they do not want to make a mess of it). Show that it is always possible to achieve the split using $s$ cuts.

SOLUTIONS: For $s=2$ a combinatorial solution is not too difficult. For any $s$, a topological/linear algebra proof exists (a nice exposition by Jiri Matousek in the reference below). http://www.amazon.com/Using-Borsuk-Ulam-Theorem-Combinatorics-Universitext/dp/3540003622 Though by now there seems to be a combinatorial proof, at http://www.combinatorics.org/Volume_16/PDF/v16i1r79.pdf Yet I believe it might be of interest as a problem that had no combinatorial proof for a while.

- One result I like is that the number of 321-avoiding permutations of length 2n whose matrices are 180°-symmetric is (2n choose n). The best proof I know is fairly short, but I wouldn't call it bijective: Under the Robinson-Schensted correspondence, the 180°-symmetric permutations are exactly the ones which map to ordered pairs of self-evacuating tableaux, which are in turn in bijection with ordered pairs of domino tableaux in the same shape. (See Stembridge) Now, if you look at the 2-row (since our permutations must be 321-avoiding) domino tableaux of size 2n, there are n+1 Ferrers shapes they can take, and they can be formed from those of size 2n-2 in a way satisfying the relation in Pascal's triangle, so the sum over all 2-row Ferrers shapes of the square of the number of domino tableaux of that shape is the sum of the squares of the binomial coefficients (n choose i), yielding (2n choose n). I've tried to "unpack" each of these steps into a simple bijection, but nothing's budged. Still, it seems like the kind of problem that someone else might be able to solve.

- The book Proofs that Really Count: The Art of Combinatorial Proof, by Art Benjamin and Jenny Quinn, contains a large number of combinatorial identities with no known combinatorial proof. (See the end of most of the chapters.) As the subtitle indicates, it is also a great reference for those interested in combinatorial proof techniques.

- Atiyah conjecture for free groups.
It has been proved by Peter Linnell using some operator-algebraic technique, but the statement seems to me to be ultimately combinatorial. For example, as an important special case there is the analytical 0-divisors conjecture: Let T be a self-adjoint element of the complex group ring of a free group, of $l^2$-norm at most 1. Consider the sequence $t_n$ of complex numbers: $t_n$ is the coefficient of the neutral element in the element $(1-T)^n$ of the group ring (so this is a combinatorial thing). One of the formulations of the analytical 0-divisors conjecture is the following theorem. Theorem (P. Linnell): If T is not 0 then the limit of the sequence $t_n$ is 0. Similarly for many other groups for which the Atiyah conjecture is known. For example, the proof for elementary amenable groups (again Linnell) uses deep K-theory, but admittedly it might be that this deep K-theory is proven using combinatorics. -

Some "positivity results" can only be proven using heavy geometric machinery at the moment. The most prominent one is positivity of the coefficients of Kazhdan-Lusztig polynomials (in the case of Weyl groups). -
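Coming back to the plane-partition recurrence quoted at the top of this thread: it is at least easy to machine-check (which, of course, proves nothing bijective). The Python sketch below is my own illustration, computing pp(n) from MacMahon's generating function $\prod_{k\ge1}(1-x^k)^{-k}$; the truncation bound N is an arbitrary choice:

````
from math import comb  # Python >= 3.8

N = 20  # check n*pp(n) = sum sigma_2(i)*pp(n-i) for n <= N

def sigma2(i):
    # sum of the squares of the divisors of i
    return sum(d*d for d in range(1, i + 1) if i % d == 0)

# pp(n) via MacMahon's product, truncated at degree N
pp = [1] + [0]*N
for k in range(1, N + 1):
    # multiply the series by (1-x^k)^(-k) = sum_j C(k+j-1, j) x^(k*j)
    new = [0]*(N + 1)
    for deg in range(N + 1):
        j = 0
        while k*j <= deg:
            new[deg] += comb(k + j - 1, j) * pp[deg - k*j]
            j += 1
    pp = new

for n in range(1, N + 1):
    assert n*pp[n] == sum(sigma2(i)*pp[n - i] for i in range(1, n + 1))
print("recurrence verified for n <= %d; pp(1..6) = %s" % (N, pp[1:7]))
````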
http://math.stackexchange.com/questions/246590/evaluate-mathop-lim-limits-x-to-infty-left-int-limits-0-pi-6
# Evaluate $\mathop {\lim }\limits_{x \to \infty } {\left( {\int\limits_0^{\pi /6} {{{(\sin t)}^x}dt} } \right)^{1/x}}$ It is given that the following limit $\mathop {\lim }\limits_{x \to \infty } {\left( {\int\limits_0^{\pi /6} {{{(\sin t)}^x}dt} } \right)^{1/x}}$ exists. Evaluate the limit. I've tried tackling this problem but I can't seem to get started. Any hint is appreciated, thanks! - ## 1 Answer Note that for every $x>\frac{6}{\pi}$, when $\frac{\pi}{6}-\frac{1}{x}\le t\le \frac{\pi}{6}$, $0<\sin\left(\frac{\pi}{6}-\frac{1}{x}\right) \le \sin t\le \frac{1}{2}$. It follows that $$\sin\left(\frac{\pi}{6}-\frac{1}{x}\right)\left(\frac{1}{x}\right)^{\frac{1}{x}}\le\left(\int_0^{\frac{\pi}{6}}(\sin t)^x dt\right)^{\frac{1}{x}}\le \frac{1}{2}\left(\frac{\pi}{6}\right)^{\frac{1}{x}}.$$ Letting $x\to\infty$, it follows that $$\lim_{x\to\infty}\left(\int_0^{\frac{\pi}{6}}(\sin t)^x dt\right)^{\frac{1}{x}}=\frac{1}{2}.$$ -
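For what it's worth, the squeeze can also be watched numerically; the sketch below is just an illustration of the answer above (the Riemann-sum resolution and the sample values of $x$ are arbitrary choices):

````
import numpy as np

# (integral_0^{pi/6} (sin t)^x dt)^(1/x) for growing x; creeps up to 1/2
for x in [50, 200, 800]:
    t = np.linspace(0.0, np.pi/6, 200001)
    I = np.trapz(np.sin(t)**x, t)
    print(x, I**(1.0/x))
````

(Values of $x$ much beyond 1000 underflow in double precision, since $(1/2)^x$ drops below the smallest representable float.)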
http://physics.stackexchange.com/questions/tagged/hydrogen+homework
# Tagged Questions

### Finding the wavelength of an electron in its ground state?
To find the wavelength of an electron in its ground state in a hydrogen atom, would I or could I do the following? Use the ground state energy (-13.6eV) in $E^2 = m^2c^4 + p^2c^2$ Solve for $p$ Use ...

### Technical detail in the solution of the hydrogen atom
I'm trying to do an exercise in which you solve the Schrödinger equation for the hydrogen atom. Through the exercise, I've already shown that the wavefunction is: \psi_{n\ell m}(r,\theta,\varphi) ...

### Wave function of Hydrogen Atom
Wavefunction of a Hydrogen atom is expressed in eigenfunctions as: $$\psi(\boldsymbol r,t=0)=1/\sqrt{14}(2\psi_{100}(\boldsymbol r)-3\psi_{200}(\boldsymbol r)+\psi_{322}(\boldsymbol r) ).$$ Is ...

### Electric dipole transitions/expectation value of position
Part of a homework question asks to show that for $\ell=0$ in both $\Psi_i$ and $\Psi_f$, we have $$\int \Psi_i^\ast \vec{r} \Psi_f \; d\tau = 0$$ for the position vector $\vec{r}$. (This is for ...

### Square of Laplace–Runge–Lenz vector in Hydrogen atom [closed]
I have a problem. I've tried this question, but I don't get the correct expression. Can someone give me some ideas? Thanks! Consider the Hydrogen Atom Hamiltonian: H = (\mathbf p^2/2 ...

### Evaluating Transition probability between different states of Hydrogen atom
I am trying to evaluate the inner product $<2S_{\frac{1}{2}},F',F'_{3}|\delta^{3}(x)\sigma_{i}P_{i}|2P_{\frac{1}{2}},F,F_{3}>$ It's written in the form $<nl_{j},F,F_{3}|$ Where ...

### Plotting Hydrogen's $2P_{x,y,z}$ Probability Densities in MATLAB [closed]
I have spent an unreasonable amount of time trying to plot $F(r,\theta,\phi)$ plane slices in MATLAB. I want to look at $x-y,y-z,x-z$ planes. Here's the function, specifically: ...
http://math.stackexchange.com/questions/7317/understanding-black-scholes
# Understanding Black-Scholes

Assume I have only basic math knowledge; what specific areas of math would I need to learn in order to understand the following webpage: Black-Scholes Many thanks. - How "basic" are we talking? Certainly you would have to know PDEs and stochastic calculus to grok Black-Scholes. – J. M. Oct 20 '10 at 17:26 Short answer: Quite a lot! (Partial differential equations, random processes, etc. And to learn that, you need a very solid foundation in calculus, probability, etc.) – Hans Lundmark Oct 20 '10 at 17:39

## 5 Answers

The standard low technology argument for Black-Scholes (the famous "binomial tree") requires only basic material, though there is also a standard medium technology approach using stochastic calculus (informally) and an advanced approach using the rigorous mathematical apparatus of stochastic processes, Brownian motion, and diffusion equations. I think summation of a finite geometric series, very basic probability, and knowledge of how to take a limit of the binomial distribution to get the Gaussian normal distribution is enough for the binary tree approach. This is in almost every elementary text on options pricing, such as the ones used in business schools or quant basic training, where by elementary I mean not starting immediately from stochastic calculus.

(This relatively simple derivation applies to the Black-Scholes formula for pricing the simplest types of options and, in some developments of the theory, also to the Black-Scholes model giving a more general formalism for pricing arbitrary European options as expected destinations of a random walk. The binomial tree is not ordinarily presented as a method for producing the Black-Scholes partial differential equation satisfied by prices in their model, although in theory it could do that. The differential equation is usually arrived at by a simple heuristic argument using stochastic calculus (given in all finance books that introduce stochastic calculus). A fully rigorous derivation is quite complicated and does require a lot of mathematical background, but this is not necessary for understanding the formula and its use in finance. An elementary mathematical derivation of the differential equation not explicitly using the binomial tree was posted at Terry Tao's webpage, though it does not dwell on financial intuitions or interpretations.)

The problem can be discretized, as the pricing of an option to buy a security whose value goes up and down in an (exponentiated) random walk, where there are a finite number of time steps and at each step the price is multiplied by $r$ or $1/r$ with a constant probability of up- or down-motion in the price at each step. Taking a limit you get the Black-Scholes formula. The key point is to consider a single time step and show that the price is determined by the relationship between the payoffs for the security and for the option. The rest is a matter of algebraically propagating this known answer back up through the binary tree (the $2^n$ possible paths for the price of the security between time 0 and time $n$) until you reach the root. This gives the answer for the finite problem, then taking the continuous limit of the problem "derives" Black-Scholes for the continuous time case. To show that the continuous time theory makes sense mathematically involves more than just suggesting that such a theory might exist as a limit of a well-defined discrete theory.
For that you need to construct the limit theory directly, which requires the machinery of stochastic analysis. To understand the limitations of the finite or continuous theory requires knowledge of finance in addition to the mathematics. [References added: low-tech approach (Cox-Ross-Rubinstein binomial trees): http://en.wikipedia.org/wiki/Binomial_options_pricing_model ; medium-tech is in most introductory books on derivatives pricing that cover stochastic calculus -- Hull and its competitors; high-tech is in books on stochastic PDE such as Oksendal or advanced finance books like Shreve.] -

Dear T, I have already read about Black-Scholes in passing, but have never seen the treatment of it being related to a binary tree. May I know what book(s) I can peer at for this? – J. M. Oct 21 '10 at 0:34 @J.M.: posting now expanded and with references. – T.. Oct 21 '10 at 1:07 Thanks a lot, T; a shame I cannot upvote an answer twice... – J. M. Oct 21 '10 at 1:18

I found this book quite useful. But it does take the background listed. The basic result, that the equation is the same as one-dimensional heat flow, and the rough graphs of how the value evolves are not too tough. There are also free calculators on the web that let you play with it. -

Take a look at the syllabi for the first three actuarial preliminary examinations. If you study for these tests, in order, you will learn everything you need in order to understand and apply the Black-Scholes formula. Each exam typically requires three hundred hours of study to do well. If you pace yourself, and you take one exam every six months, you will be studying 11 to 12 hours per week for one and a half years to master this material. Of course, you could cover this material much faster, if you're more interested in breadth than depth. The first three exams are 2.5 to 3 hours each. They are

1. 1/P - The Probability Exam - A thorough command of calculus and probability topics is assumed.
2. 2/FM - Financial Mathematics Exam - Interest theory (discrete and continuous) and an introduction to derivative securities.
3. 3F/MFE - Financial Economics Exam - Interest rate models, rational valuation of derivative securities, simulation, and financial risk management techniques.

The syllabi list the textbooks and papers you should study. Additionally, there are study guides that cover each test. Exam 3F/MFE covers Black-Scholes. Specifically, you must be able to

• Calculate the value of European and American options using the Black-Scholes option-pricing model.
• Interpret the option Greeks.
• Explain the properties of a lognormal distribution and explain the Black-Scholes formula as a limited expected value for a lognormal distribution. -

Well, it depends whether you want to understand especially this derivation or a derivation in general (that is, what is going on there). The easiest and most intuitive derivation I have seen, ever, is this one: Intuitive Proof of Black-Scholes Formula Based on Arbitrage and Properties of Lognormal Distribution by Alexei Krouglov
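To make the low-technology route concrete, here is a minimal sketch of the Cox-Ross-Rubinstein tree converging to the closed-form price of a European call. This is my own illustration, not taken from any of the references above, and the parameter values are arbitrary:

````
from math import exp, sqrt, log, erf

def bs_call(S, K, r, sigma, T):
    # closed-form Black-Scholes price of a European call
    N = lambda z: 0.5*(1 + erf(z/sqrt(2)))   # standard normal CDF
    d1 = (log(S/K) + (r + 0.5*sigma**2)*T) / (sigma*sqrt(T))
    d2 = d1 - sigma*sqrt(T)
    return S*N(d1) - K*exp(-r*T)*N(d2)

def crr_call(S, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial tree with n time steps
    dt = T/n
    u = exp(sigma*sqrt(dt)); d = 1/u
    p = (exp(r*dt) - d) / (u - d)            # risk-neutral up-probability
    disc = exp(-r*dt)
    v = [max(S * u**j * d**(n-j) - K, 0.0) for j in range(n+1)]  # payoffs
    for _ in range(n):                        # backward induction to the root
        v = [disc*(p*v[j+1] + (1-p)*v[j]) for j in range(len(v)-1)]
    return v[0]

for n in (10, 100, 1000):
    print(n, crr_call(100, 100, 0.05, 0.2, 1.0, n))
print("closed form:", bs_call(100, 100, 0.05, 0.2, 1.0))
````

The tree prices settle on the closed-form value (about 10.45 for these inputs) as n grows.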
http://math.stackexchange.com/questions/265105/how-can-i-find-maximum-and-and-minimum-values-of-fx-y-xye-xy
# How can I find maximum and minimum values of $f(x,y)=xye^{-(x+y)}$?

How can I find maximum and minimum values of $f(x,y)=xye^{-(x+y)}$ in the region $(x-1)^2+(y-1)^2 \leq 4$? -

## 3 Answers

First look for critical points of $f$ in the interior $(x-1)^2 + (y-1)^2 < 4$. Do this by comparing the partial derivatives of $f$ to zero. Next search for critical points on the boundary $(x-1)^2 + (y-1)^2 = 4$ using Lagrange multipliers. Compute $f$ at these points and, by putting everything together, you will find the maximum and minimum of $f$ in the region.

EDIT: Computing the derivatives: $$f_x(x,y) = y(1-x)e^{-(x+y)} \\\\ f_y(x,y) = x(1-y)e^{-(x+y)}$$ Comparing both to zero we get: $$y(1-x) = 0 \\\\ x(1-y) = 0$$ Simultaneous solutions are $(0,0)$ and $(1,1)$, which are both within the interior (but as we will see these points don't matter anyway). Computing $f$ at these points gives $0$ and $e^{-2} \approx 0.135$ correspondingly.

Now to the boundary. It is given by $g(x,y) = 4$ where $g(x,y) = (x-1)^2 + (y-1)^2$ and the partial derivatives: $$g_x(x,y) = 2(x-1) \\\\ g_y(x,y) = 2(y-1)$$ Using the Lagrange multipliers method, we have to solve $$g(x,y) = 4 \\\\ f_x(x,y) - \lambda g_x(x,y) = 0 \\\\ f_y(x,y) - \lambda g_y(x,y) = 0$$ to find the critical points on the boundary. Substituting: $$(x-1)^2 + (y-1)^2 = 4 \\\\ y(1-x)e^{-(x+y)} - \lambda 2(x-1) = 0\\\\ x(1-y)e^{-(x+y)} - \lambda 2(y-1) = 0$$ Assuming $x,y \neq 1$ we can divide the last two equations by $(x-1)$ and $(y-1)$ correspondingly, so they become: $$ye^{-(x+y)} = - 2\lambda \\\\ xe^{-(x+y)} = - 2\lambda$$ which implies $x = y$. Substituting into $g(x,y) = 4$, we have: $$2(x-1)^2 = 4 \quad \implies \quad x = 1 \pm \sqrt{2}$$ Corresponding values of $f$ are $$f(1 - \sqrt{2}, 1 - \sqrt{2}) = (3 - 2 \sqrt{2}) e^{-2 + 2\sqrt{2}} \approx 0.393 \\\\ f(1 + \sqrt{2}, 1 + \sqrt{2}) = (3 + 2 \sqrt{2}) e^{-2 - 2\sqrt{2}} \approx 0.0466$$ For the case where $x = 1$ or $y = 1$ we plug them directly into $g(x,y) = 4$ to get $$x = 1 \quad \implies \quad (y-1)^2 = 4 \quad \implies \quad y = -1, 3 \\\\ y = 1 \quad \implies \quad (x-1)^2 = 4 \quad \implies \quad x = -1, 3$$ Corresponding values of $f$ are $$f(-1,1) = f(1,-1) = -e^0 = -1 \\\\ f(3,1) = f(1,3) = 3e^{-4} \approx 0.0549$$ We conclude that the minimum value of $-1$ is attained at $(-1,1)$ and $(1,-1)$, and the maximum value of $(3 - 2 \sqrt{2}) e^{-2 + 2\sqrt{2}}$ is attained at $(1 - \sqrt{2}, 1 - \sqrt{2})$. -

Could you tell me if the critical points are (1,0) and (0,1) or not? I'm not sure – Muhammad Khalifa TranCer Dec 25 '12 at 21:41 Fx =-xye^-(x+y) +ye^-(x+y) = 0 so -xy+y = 0 and then y(1-x)=0 so y=0 and x=1 is a point, is that right? – Muhammad Khalifa TranCer Dec 25 '12 at 21:48 @MuhammadKhalifaTranCer The critical points are not $(1,0)$ nor $(0,1)$; see my reply below. – Fly by Night Dec 25 '12 at 21:53 @MuhammadKhalifaTranCer: Nope. You have $y=0$ or $x=1$. The same goes for $f_y$. But this only gives you the interior points, which according to Wolfram happen to not be the maxima and minima in the region. The extrema are actually achieved on the boundary, which is the second half of the question. – ybungalobill Dec 25 '12 at 21:53

There is a theorem, called the second derivative test for multiple variables, which states that you can find the maxima/minima of f by studying f at its critical points.
You have limited the domain of this function, so we will only include values inside the domain, and use another method to find values on the boundary (Lagrange multipliers).

The theorem states that at a point $(a,b)$, where $f_x(a,b)=f_y(a,b)=0$, if $f$ is defined on an open disk around $(a,b)$, then let $D=D(a,b)=f_{xx}(a,b)f_{yy}(a,b) - [f_{xy}(a,b)]^2$. There are 3 cases for $D$:

1) if $D>0$ and $f_{xx}>0$, then $f(a,b)$ is a local minimum

2) if $D>0$ and $f_{xx}<0$, then $f(a,b)$ is a local maximum

3) if $D<0$ then $f(a,b)$ is neither a local minimum nor a local maximum (saddle point)

It's helpful to remember $D$ as the determinant of the Hessian of the function, i.e., $D=\left|\begin{array}{ll} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{array}\right| = f_{xx}f_{yy} - f_{xy}f_{yx}$. Note that, by Clairaut's theorem, $f_{xy} = f_{yx}$.

Ok, so coming back to your question: $f_x = (1-x)ye^{-(x+y)}\\ f_y = (1-y)xe^{-(x+y)}$ So there are critical points at (1,1) and (0,0). These both lie in the domain... And we also have that $f_{xx} = (x-2)ye^{-(x+y)}\\ f_{yy} = (y-2)xe^{-(x+y)}\\ f_{xy} = (x-1)(y-1)(e^{-(x+y)})$ So $D(a,b)=e^{-2(a+b)}((a-2)(b-2)ab - (a-1)^2(b-1)^2)$ So evaluate $D$ at the critical points: $D(0,0) = e^0(-1) = -1 < 0\\ D(1,1) = e^{-4}(1-0) = e^{-4} > 0$ So $(0,0)$ is a saddle point and $(1,1)$ is a local maximum... The local maximum for the unbounded problem is then $f(1,1) = e^{-2} \approx 0.135$.

We're not done yet. It could be that $f$ grows/decays arbitrarily near the edge of the region, so the boundary may hold local maxima/minima too. To solve this we use Lagrange multipliers. The way I like to do these is to write the gradient of f as a linear combination of the gradients of the constraints. (Often one can also get away with little more than the AM-GM inequality; never mind that though, onto Lagrange.)

$\nabla f(x,y) = \lambda \nabla g(x,y)$ and $g(x,y) = 4$, where here $f(x,y) = xye^{-(x+y)}$ and $g(x,y) = (x-1)^2 + (y-1)^2$. I'll let you figure the partial derivatives for g out yourself. After that, we get the system: $\begin{array}{lcl} (1-x)ye^{-(x+y)} = \lambda 2(x-1) \\ (1-y)xe^{-(x+y)} = \lambda 2(y-1) \\ (x-1)^2 + (y-1)^2 = 4 \end{array}$ One could treat this casewise: if $x=1$ then $y=-1$ or $3$; if $y=1$ then $x=-1$ or $3$; and if neither $x$ nor $y$ is $1$, then $x=y=1\pm \sqrt{2}$.

Finally we must check these 8 points (6 boundary and 2 interior) as to which is maximal/minimal. My favorite way is to plug them in and see which are the highest/lowest... I've used Python 2.7; my code is:

````
from math import sqrt, e

points = [(1,-1),(1,3),(-1,1),(3,1),
          (1+sqrt(2),1+sqrt(2)),(1-sqrt(2),1-sqrt(2)),
          (1,1),(0,0)]

def f(x,y):
    return x*y*e**(-(x+y))

maxima = max(f(x,y) for x,y in points)
minima = min(f(x,y) for x,y in points)
for x,y in points:
    if f(x,y)==minima:
        print 'f(%.3f,%.3f) = %.3f is the minimum' % (x,y,f(x,y))
    if f(x,y)==maxima:
        print 'f(%.3f,%.3f) = %.3f is the maximum' % (x,y,f(x,y))
````

You will then find that the minimum on the boundary is attained at $f(\pm 1, \mp 1) = -1$ and the maximum is achieved at $f(1-\sqrt{2},1-\sqrt{2}) = (3-2\sqrt{2})e^{2(\sqrt{2}-1)}$. -

It happens that we can answer this question easily without resorting to Lagrange multipliers. In the sequel, let $R=[-1,3]^2$ and $D=\{(x-1)^2 + (y-1)^2 \le 4\} \subset R$. Let us deal with the minima first. Note that $f(x,y)=g(x)g(y)$ where $g(t)=te^{-t}$. The function $g$ is unimodal and its global maximum occurs at $t=1$, with $g(1)=e^{-1}>0$.
Also, $g(t)\searrow-\infty$ when $t\rightarrow-\infty$ and $g(t)\searrow0$ when $t\rightarrow+\infty$. Therefore, on the square $R=[-1,3]^2$, $f$ is the most negative when $g(x)$ is the most positive and $g(y)$ is the most negative (or when $g(y)$ is the most positive and $g(x)$ is the most negative). Hence $$\min_{(x,y)\in[-1,3]^2}f(x,y) = \max_{x\in[-1,3]}g(x)\min_{y\in[-1,3]}g(y)=g(1)g(-1) = -1.$$ Since $(1,-1)\in D\subset[-1,3]^2$, we see that $\min\limits_D f(x,y)=f(1,-1)=-1$. By symmetry, $(-1,1)$ is another global minimum point.

To find the maximum, we first note that, by a similar argument to the above, we have $\max_{D\cap[0,3]^2} f(x,y) = f(1,1) = e^{-2}>0$. It remains to compare this maximum value to $\max_{D\cap[-1,0]^2} f$. Now, when $x,y\le 0$, we have, by A.M.$\ge$G.M., \begin{align} (-x)(-y) e^{-(x+y)} &\le \left[\frac{(-x)+(-y)}{2}\right]^2 e^{-(x+y)}\\ \Rightarrow\quad g(x)g(y) &\le g^2\left(\frac{x+y}{2}\right). \end{align} In other words, on $D\cap[-1,0]^2$, the maximum of $f$ occurs when $x=y$. Since $g(t)\le0$ and $g(t)\searrow-\infty$ when $t\rightarrow-\infty$, $f(x,x)$ attains its maximum on $D\cap[-1,0]^2$ when $(x-1)^2+(x-1)^2=4$, i.e. when $x=1-\sqrt{2}$. As $f(1-\sqrt{2},\,1-\sqrt{2})=(3 - 2 \sqrt{2}) e^{-2 + 2\sqrt{2}}>e^{-2}$, we have confirmed that $(1-\sqrt{2},\,1-\sqrt{2})$ gives the global maximum of $f$ over $D$. -
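All three answers agree, and a brute-force grid search over the disk reproduces the same numbers. The sketch below is my own sanity check (the grid resolution is an arbitrary choice):

````
import numpy as np

x = np.linspace(-1.0, 3.0, 2001)
X, Y = np.meshgrid(x, x)
F = X*Y*np.exp(-(X+Y))
F[(X-1)**2 + (Y-1)**2 > 4] = np.nan   # discard points outside the disk

i, j = np.nanargmax(F), np.nanargmin(F)
print("max %.4f at (%.3f, %.3f)" % (F.flat[i], X.flat[i], Y.flat[i]))
print("min %.4f at (%.3f, %.3f)" % (F.flat[j], X.flat[j], Y.flat[j]))
# expected: max ~0.3932 near (-0.414, -0.414), min -1.0000 at (1,-1) or (-1,1)
````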
http://physics.stackexchange.com/questions/57165/question-about-the-linearity-of-wave-functions/57166
# Question about the linearity of wave functions

For a piecewise constant potential, the potential energy is constant, so the time dependent wave function can take the form $\psi(x,t)=C_1e^{i(kx- \omega t)}+C_2e^{i(-kx-\omega t)}$ where $k=\frac{p}{\hbar}=\frac{\sqrt{2mE}}{\hbar}$ and $E$ is just the difference between the energy of the particle and the constant potential. When testing $\psi(x,t)$ as an eigenfunction of the momentum operator, the result is that $-i\hbar \frac{\partial \psi(x,t)}{\partial x}=\hbar k(C_1e^{i(kx- \omega t)}-C_2e^{i(-kx-\omega t)}) \not= p\psi(x,t)$ But when testing the two components of the wave function separately, it is found that they are eigenfunctions. $-i\hbar \frac{\partial \psi_1(x,t)}{\partial x}=-i\hbar \frac{\partial}{\partial x} (C_1e^{i(kx-\omega t)})=-i \hbar (ik\psi_1)=\hbar k\psi_1=p_x\psi_1$ $-i\hbar \frac{\partial \psi_2(x,t)}{\partial x}=-i\hbar \frac{\partial}{\partial x} (C_2e^{i(-kx-\omega t)})=-i \hbar (-ik\psi_2)=-\hbar k\psi_2=-p_x\psi_2$ $\psi_1$ describes the particle moving in the $+x$ direction and $\psi_2$ describes the particle moving in the $-x$ direction. So if two solutions to the S.E. describe definite states of momentum, the sum $\psi(x,t)$ does not necessarily have to describe a definite state of momentum?

The functions $\psi(x,t),\psi_1(x,t), \psi_2(x,t)$ are eigenfunctions of the energy operator $i\hbar \frac{\partial}{\partial t}$, and the commutator $[H,P]=i\hbar \frac{\partial V(x)}{\partial x}$, which means that definite states of energy and momentum can be found simultaneously as long as we have a constant potential. (However, $H$ can only be used in the time-independent case, right? So I'm not sure if this applies.) It's evident from the math above that $\psi(x,t)$ is not an eigenfunction of the momentum operator, but is there a physical reason for why it shouldn't be? -

## 1 Answer

Correct. The state $\psi$ you've described is a superposition of two momentum eigenstates, with momentum $p$ and $-p$. So $\psi$ does not itself have a definite momentum - if you measured the momentum of a particle in this state you'd get either $p$ or $-p$ with a probability distribution that depends on the $C_1, C_2$. The energy of a free particle depends only on its speed, not its direction of movement, so the energy basis is degenerate. Here your state has a definite value of energy because in all cases, the particle is moving with the same speed. However, momentum is a vector quantity and does depend on the direction of movement.

In general, a linear combination of two eigenstates of a given operator is not going to be another eigenstate of that operator - unless the two eigenstates also have the same eigenvalue. For example, here $\psi_1$ and $\psi_2$ are both eigenstates of $H$ with the same eigenvalue (energy) $E$, so $\psi$ is still an eigenstate of $H$. But since $\psi_1$ and $\psi_2$ are eigenstates of $P$ with different eigenvalues (momenta) $p$ and $-p$, then $\psi$ is not an eigenstate of $P$.

(By the way, you wrote: "$H$ can only be used in the time-independent case right?" It requires a time-independent potential. It's fine to use it with time-dependent wavefunctions.) -
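The computation in the question is easy to repeat symbolically; the SymPy sketch below is my own check (variable names are mine) that each travelling wave is an eigenfunction of $\hat p = -i\hbar\,\partial/\partial x$ while the superposition is not:

````
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, w, hbar, C1, C2 = sp.symbols('k omega hbar C1 C2', positive=True)

psi1 = C1*sp.exp(sp.I*( k*x - w*t))   # right-moving component
psi2 = C2*sp.exp(sp.I*(-k*x - w*t))   # left-moving component
psi  = psi1 + psi2

p = lambda f: -sp.I*hbar*sp.diff(f, x)   # momentum operator

print(sp.simplify(p(psi1)/psi1))           # hbar*k  : eigenfunction
print(sp.simplify(p(psi2)/psi2))           # -hbar*k : eigenfunction
print(sp.simplify(p(psi) - hbar*k*psi))    # -2*hbar*k*psi2, nonzero
````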
http://physics.stackexchange.com/questions/18762/locality-in-quantum-mechanics?answertab=active
# Locality in Quantum Mechanics

We speak of locality or non-locality of an equation in QM, depending on whether or not it contains differential operators of order higher than two. My question is, how could one tell from looking at the concrete solutions of the equation whether the equation was local or not... or, to put it another way, what would it mean to say that a concrete solution was non-local?

edit: let me emphasise this grew from a problem in one-particle quantum mechanics (the Klein-Gordon equation). Let me clarify that I am asking what is the physical meaning of saying a solution, or space of solutions, is non-local. Answers that hinge on the form of the equation are... better than nothing, but I am hoping for a more physical answer, since it is the solutions which are physical, not the way they are written down... This question, which I had already read, is related but the relation is unclear: Why are higher order Lagrangians called 'non-local'? -

I think determining locality based on solutions is going to be difficult in any non-relativistic context, because there is no speed limit - any local perturbation of the system may spread to the entire solution in zero time. It's a lot cleaner in a relativistic context; if any given part of a solution depends only on its past light-cone, the system is local. – Harry Johnston Dec 26 '11 at 4:31 Alright, let us assume we are in a relativistic context. I am thinking of the square root of the usual Klein-Gordon equation, so it is not a differential equation at all, but still has a propagator. – joseph f. johnson Dec 26 '11 at 4:40 I'm shaky on single-particle relativistic QM. But if you have a propagator, then you should be able to apply the same criteria: if a given part of a solution depends only on the past light-cone, the theory is local. – Harry Johnston Dec 26 '11 at 20:16 I see. (BTW, my impression is that everyone is shaky on fixed-finite-number-of-particles relativistic QM. This is why they hurry on to QFT.) So looking at how fast a Dirac delta function spreads in time would be a good criterion. Then the non-relativistic Schroedinger equation is non-local in that sense, as is well known, but I guess any relativistic equation would have to satisfy this criterion. – joseph f. johnson Dec 26 '11 at 20:57

## 4 Answers

Presuming that there aren't nonlocal constraints, a differential operator that is polynomial in differential operators is local; it doesn't have to be quadratic. My understanding is that irrational or transcendental functions of differential operators are generally nonlocal (though that's perhaps a question for math.SE). A given space of solutions implies a particular nonlocal choice of boundary conditions, unless the equations are on a compact manifold (which, however, is itself a nonlocal structure). There is always an element of nonlocality when we discuss solutions in contrast to equations. [For the anti-locality of the operator $(-\nabla^2+m^2)^\lambda$ for odd dimension and non-integer $\lambda$, one can see I.E. Segal, R.W. Goodman, J. Math. Mech. 14 (1965) 629 (for a review of this paper, see here).]

EDIT: Sorry, I should have gone straight to Hegerfeldt's theorem. Schrodinger's equation is enough like the heat equation to be nonlocal in Hegerfeldt's sense. There are two theorems, from 1974 in PRD and from 1994 in PRL, but in arXiv:quant-ph/9809030 we have, of course with references to the originals,

Theorem 1. Consider a free relativistic particle of positive or zero mass and arbitrary spin.
Assume that at time $t=0$ the particle is localized with probability 1 in a bounded region $V$. Then there is a nonzero probability of finding the particle arbitrarily far away at any later time.

Theorem 2. Let the operator $H$ be self-adjoint and bounded from below. Let $\mathcal{O}$ be any operator satisfying $$0\le \mathcal{O} \le \mathrm{const.}$$ Let $\psi_0$ be any vector and define $$\psi_t \equiv \mathrm{e}^{-\mathrm{i}Ht}\psi_0.$$ Then one of the following two alternatives holds. (i) $\left<\psi_t,\mathcal{O}\psi_t\right>\not=0$ for almost all $t$ (and the set of such $t$'s is dense and open) (ii) $\left<\psi_t,\mathcal{O}\psi_t\right>\equiv 0$ for all $t$.

Exactly how to understand Hegerfeldt's theorem is another question. It seems almost as if it isn't mentioned because it's so inconvenient (the second theorem, in particular, has a rather simple statement with rather general conditions), but a lot depends on how we define local and nonlocal. I usually take Hegerfeldt's theorem to be a non-relativistic cognate of the Reeh-Schlieder theorem in axiomatic QFT, although that's perhaps heterodox, where microcausality is close to the only definition of local. Microcausality is one of the axioms that leads to the Reeh-Schlieder theorem, so, no nonlocality. -

I'd like to see this answer refined a little... but +1 for the reference. a) I've never seen a def'n of 'local operator'. b) Other posters, at other places, have claimed that any order higher than 2 becomes non-local. c) If written as Fourier Integral Operators, you could not easily tell the difference between a differential operator and a square root of one.... d) if being $L^2$ (which is what is in question) is considered a non-local boundary constraint, then Schroedinger's eq. would have to be considered non-local too, right? – joseph f. johnson Dec 25 '11 at 22:56 To be honest, I wrote this Answer supposing only that it might be marginally useful, but I don't know details about DEs well enough to go much further. Someone else can do better. I regret that the reference I gave is about the only definitive thing I can bring to the table. I think boundary conditions are relevant, but the Cauchy problem must be more relevant. Maybe there's no simple classification of local/nonlocal because the Cauchy problem is nontrivial? It also occurs to me that the heat equation, using the same differential operator as the Schrodinger equation, is nonlocal. – Peter Morgan Dec 26 '11 at 0:10 Is there a typo in the statement of Thm. One? Should it not be 'non-relativistic particle'? – joseph f. johnson Dec 26 '11 at 21:13 From the arXiv paper, top of page 5, "relativity is not needed, as shown by the author and Ruijsenaars [11]"; in other words, the original theorem was for the relativistic case, so that's what has been repeated here, but one could equally well say relativistic or non-relativistic. – Peter Morgan Dec 26 '11 at 21:55 Hey Joseph, most of the time it's the references that matter here. Off the cuff Answers rarely compete with papers that represent months or years of work, unsurprisingly. Particularly true of me, sadly. – Peter Morgan Dec 27 '11 at 13:07

The mathematical notion of a local operator is that if the operator $T$ is applied to a smooth function $f$, then $Tf(x)$ only depends on the behaviour of $f$ in a neighbourhood of $x$, and the neighbourhood can be arbitrarily small. In particular, the support of $Tf$ has to lie inside of the support of $f$.
A local operator, in this sense, has to be a differential operator. This is a mathematical notion which is relevant to whether you could define $T$ on an arbitrary smooth manifold in a way independent of coordinates, but it does not seem very physical since, as has been pointed out in this site (see Use of Operators in Quantum Mechanics, where the question « if I simply apply the momentum operator to the wave function $-i\hbar{d \over dx}\Psi$ Will I get an Equation that will provide the momentum for a given position? Or is that a useless mathematical thing I just did? » is answered by « The first thing you did is useless »), just taking an observable $Q$ or a Hamiltonian $H$ and applying it to a wave function $\psi$ is not very physical. What is physical is $e^{-iHt} \psi$ or the eigenvalues of $H$ or of $Q$.

For this reason the question was formulated in terms of whether you could tell from the wave function or its evolution whether things were physically « local » or not. So far as I can tell, the only concrete answer to this is whether the time evolution would violate Einstein causality or not. It is well known that the notion of observable in Relativistic Quantum Mechanics is a tangled one (the Newton--Wigner position observables are famously non-local) and, on the other hand, the Born interpretation of the wave function becomes problematic as well (with the Klein--Gordon equation, it leads to negative probabilities). (These difficulties can be evaded in Quantum Field Theory, but then, as we all know, new difficulties arise.) Perhaps this calls into question the connection between « observables » in Relativistic Quantum Mechanics, even one-particle mechanics, and real, physical, measurements... Thx to all who tried to help, and especially the very useful references. And correct me if I have made a mistake here. -

One way of saying this is in terms of the problem with fixed boundary conditions. If you have a differential equation with local derivative operators, and you impose a boundary condition that $\psi$ is zero on the surface of a thickened sphere of small positive width $\epsilon$ for all time, then anything you do on the interior of the sphere will not affect the exterior of the sphere. This is true when the Hamiltonian is a polynomial, since if you make a lattice approximation, only a finite number of lattice neighbors contribute to the time derivative at any point. If you make a nonlocal Hamiltonian, like taking the square root of $-\nabla^2 + m^2$ to get the positive energy Klein Gordon propagator, you will not be able to express it in terms of a finite number of lattice neighbors, and the particle will be able to leak out of the interior of a sphere to influence the exterior.

I phrased it using additional boundary conditions because a nonrelativistic delta-function initial condition will spread to all space instantly even using a polynomial Hamiltonian, so it isn't local in the sense of finite maximum propagation speed. There is still a distinction between local and nonlocal operators, but it is best phrased in terms of how lattice position x depends on lattice position x' when x-x' becomes a large integer number of spacings. -

So far, there are three different answers. Could you give a reference for this one? – joseph f. johnson Dec 26 '11 at 20:15 By « surface » do you mean « crust », i.e., are you including the region between the $r=1-\epsilon$ surface and the $r=1$ surface?
I.e., you are requiring $\psi$ to vanish on a region of positive measure, so you are ruling out analytic solutions... – joseph f. johnson Dec 26 '11 at 21:20 @joseph: I don't know a reference. Why would you trust a reference more? You have to work it out in either case, reference or not. I am requiring $\psi$ to vanish on a region of positive measure. There is no analyticity anyway. The point is that the interior and exterior are decoupled when you have a local equation. – Ron Maimon Dec 27 '11 at 4:04 Well, I can't tell what you mean by « anything you do in the interior of the sphere ... influence the exterior. » Obviously after the passage of time a wave initially satisfying your conditions will stop fulfilling them. So I am hoping that either you, or a reference, can phrase your idea more clearly. What, precisely, do you mean by « anything you can do »? One thing I can do is pose a $\psi_o$ supported in the region... – joseph f. johnson Dec 27 '11 at 7:10 @Joseph: I meant imposing the condition for all times, not just at t=0. If you don't impose the condition for all times, then the wavefunction will leak out instantly. – Ron Maimon Dec 27 '11 at 7:42

Here is an experimental physicist's view. Locality for me means that the solutions representing a particle give the particle as local, i.e. once the boundary conditions are given, all its observables and interactions depend on functions of a single spacetime point (x,y,z,t). Non-local means that the solutions giving the observables and interactions of a particle depend on an extended volume around each spacetime point (x,y,z,t). Experimentally I would look for non-locality in interactions, which would no longer be point interactions (Feynman diagrams) but would have to be extended diagrams over the volume of non-locality, and therefore the values measured for the observables would be different from the values predicted by the local theory if the non-local theory holds, given enough experimental accuracy. -

This is almost unintelligible. If there is one particle, what do you mean by 'its interactions'? Are you saying that the concept of locality only applies to interactions between particles or between fields? If that is what you mean, perhaps you mean further that the answer to my question is that one should not speak of a one-particle QM equation as either local or non-local? – joseph f. johnson Dec 26 '11 at 4:39 Of course there is not one particle only, though locality applies to one-particle solutions too. The one-particle quantum mechanical equation is local, because the solutions that give observables depend simply on (x,y,z,t), not on convolutions at each point over some non-local volume. As for interactions, all experimental data come from interactions of one form or the other. – anna v Dec 26 '11 at 5:34 Locality and non-locality are really used to assert whether the very basic elementary particle interactions are local or non-local. This means the one-particle equations and their solutions, as well as the Quantum Field Theoretical representations of interactions of particle fields, at a point and not a locus. – anna v Dec 26 '11 at 5:47 You speak of « the one-particle eq. » I am thinking of three different ones, which are often discussed in this context: Schroedinger's non-relativistic equation, the Dirac equation, and the Klein--Gordon equation. Correct me if I am wrong, but your point of view has them all as being local. – joseph f. johnson Dec 26 '11 at 5:52 Yes. All one-particle QM equations are local.
– anna v Dec 26 '11 at 6:27
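The anti-locality discussed above can be illustrated numerically in one dimension: apply $\sqrt{-\partial_x^2+m^2}$ as a Fourier multiplier to a bump supported in $[-1,1]$ and compare with the local operator $-\partial_x^2+m^2$. The sketch below is my own (grid sizes are arbitrary, periodic FFT boundary effects are ignored); the square root should visibly spread the support, while the polynomial operator should leave only discretization noise outside it:

````
import numpy as np

N, L, m = 4096, 60.0, 1.0
x = (np.arange(N) - N//2) * (L/N)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

# smooth bump supported in [-1, 1]
f = np.where(np.abs(x) < 1, np.exp(-1.0/np.maximum(1 - x**2, 1e-12)), 0.0)

sqrt_op  = np.fft.ifft(np.sqrt(k**2 + m**2) * np.fft.fft(f)).real  # nonlocal
local_op = np.fft.ifft((k**2 + m**2) * np.fft.fft(f)).real         # local

far = np.abs(x) > 2   # a full unit away from the support
print("sqrt operator,  max far field: %.2e" % np.abs(sqrt_op[far]).max())
print("local operator, max far field: %.2e" % np.abs(local_op[far]).max())
````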
http://mathoverflow.net/revisions/17425/list
# Homotopy Limits over Fibered Categories

Suppose I have a small category $\mathcal{C}$ which is fibered over some category $\mathcal{I}$ in the categorical sense. That is, there is a functor $\pi : \mathcal{C} \rightarrow \mathcal{I}$ which is a fibration of categories. (One way to say this, I guess, is that $\mathcal{C}$ has a factorization system consisting of vertical arrows, i.e. the ones that $\pi$ sends to an identity arrow in $\mathcal{I}$, and horizontal arrows, which are the ones it does not. But there are many other characterizations.)

Now let $F : \mathcal{C} \rightarrow s\mathcal{S}$ be a diagram of simplicial sets indexed by $\mathcal{C}$. My question concerns the homotopy limit of $F$. Intuition tells me that there should be an equivalence $$\varprojlim_{\mathcal{C}} F \simeq \varprojlim_{\mathcal{I}} \left (\varprojlim_{\mathcal{C}_i} F_i \right )$$ where I write $\mathcal{C}_i = \pi^{-1}(i)$ for any $i \in \mathcal{I}$, $F_i$ for the restriction of $F$ to $\mathcal{C}_i$ and $\varprojlim$ for the homotopy limit.

Intuitively this says that when $\mathcal{C}$ is fibered over $\mathcal{I}$, I can find the homotopy limit of a $\mathcal{C}$ diagram of spaces by first forming the homotopy limit of all the fibers, realizing that this collection has a natural $\mathcal{I}$ indexing, and then taking the homotopy limit of the resulting diagram. Does anyone know of a result like this in the model category literature?

Update: After reading the responses, I was able to find a nice set of exercises here which go through this result in its homotopy colimit version.
http://physics.stackexchange.com/questions/tagged/disorder+linear-algebra
# Tagged Questions

### Random quantum systems with asymmetric Lifshitz tails?
For a quantum mechanical system with a periodic Hamiltonian (Schrödinger operator) $H$, let $N(E)$ be its integrated density of states, i.e. the fraction of eigenvalues in the spectrum $\sigma(H)$ ...
http://unapologetic.wordpress.com/2011/02/01/the-branching-rule-part-4/?like=1&source=post_flair&_wpnonce=5092e672ce
The Unapologetic Mathematician

The Branching Rule, Part 4

What? More!? Well, we got the idea to look for the branching rule by trying to categorify a certain combinatorial relation. So let's take the flip side of the branching rule and decategorify it to see what it says!

Strictly speaking, decategorification means passing from a category to its set of isomorphism classes. That is, in the case of our categories of $S_n$-modules we should go from the Specht module $S^\lambda$ to its character $\chi^\lambda$. And that is an interesting question, but since the original relation turned into the dimensions of the modules in the branching rule, let's do the same thing in reverse.

So the flip side of the branching rule tells us how a Specht module decomposes after being induced up to the next larger symmetric group. That is: $\displaystyle S^\lambda\!\!\uparrow_{S_n}^{S_{n+1}}\cong\bigoplus\limits_{\lambda^+}S^{\lambda^+}$ To "decategorify", we take dimensions $\displaystyle\begin{aligned}\dim\left(S^\lambda\!\!\uparrow_{S_n}^{S_{n+1}}\right)&=\dim\left(\bigoplus\limits_{\lambda^+}S^{\lambda^+}\right)\\&=\sum\limits_{\lambda^+}\dim\left(S^{\lambda^+}\right)\\&=\sum\limits_{\lambda^+}f^{\lambda^+}\end{aligned}$ As we see, the right hand side is obvious, but the left takes a little more work. We consult the definition of induction to find $\displaystyle S^\lambda\!\!\uparrow_{S_n}^{S_{n+1}}=\mathbb{C}\left[S_{n+1}\right]\otimes_{S_n}S^\lambda$ Calculating its dimension is a little easier if we recall what induction looks like for matrix representations: the direct sum of a bunch of copies of $S^\lambda$, one for each element in a transversal of the subgroup. That is, there are $\displaystyle\lvert S_{n+1}/S_n\rvert=\lvert S_{n+1}\rvert/\lvert S_n\rvert=\frac{(n+1)!}{n!}=n+1$ copies of $S^\lambda$ in the induced representation. We find our new relation: $\displaystyle(n+1)f^\lambda=\sum\limits_{\lambda^+}f^{\lambda^+}$ That is, the sum of the numbers of standard tableaux of all the shapes we get by adding an outer corner to $\lambda$ is $n+1$ times the number of standard tableaux of shape $\lambda$. This is actually a sort of surprising result, and there should be some sort of combinatorial proof of it. I'll admit, though, that I don't know of one offhand. If anyone can point me to a good one, I'd be glad to post it.

1 Comment »

1. Hi John, I *think* I have a combinatorial proof- I'd be happy to write it up and send you a copy if you're interested. Cheers, Andy Comment by Andrew Poulton | February 4, 2011 | Reply
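Lacking a bijection, one can at least machine-check the relation $(n+1)f^\lambda=\sum_{\lambda^+}f^{\lambda^+}$ with the hook length formula $f^\lambda = n!/\prod_{c\in\lambda}h(c)$. The quick Python sketch below (an illustration only, written for this note) verifies it for all partitions of $n\le 8$:

````
from math import factorial

def num_tableaux(shape):
    # hook length formula: f^lambda = n! / (product of hook lengths)
    n, prod = sum(shape), 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in shape[i+1:] if r > j)
            prod *= arm + leg + 1
    return factorial(n) // prod

def add_outer_corner(shape):
    # all shapes lambda^+ obtained by adding a single box
    out = []
    for i in range(len(shape) + 1):
        row = shape[i] if i < len(shape) else 0
        above = shape[i-1] if i > 0 else row + 1
        if row < above:
            new = list(shape)
            if i == len(shape):
                new.append(0)
            new[i] += 1
            out.append(tuple(new))
    return out

def partitions(n, maxpart=None):
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart or n), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

for n in range(1, 9):
    for lam in partitions(n):
        assert (n+1)*num_tableaux(lam) == \
            sum(num_tableaux(m) for m in add_outer_corner(lam))
print("identity verified for all partitions of n <= 8")
````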
http://math.stackexchange.com/questions/20858/graph-for-fx-sin-x-cos-x
# Graph for $f(x)=\sin x\cos x$

Okay, so in my math homework I'm supposed to draw a graph of the following function: $$f(x)=\sin x \cos x.$$ I have the solution in the textbook, but I just can't figure out how they got to that. So, if someone could please post (a slightly more detailed) explanation for this, it would be really appreciated. I have to turn in the homework this Wednesday, but since I already have the solution the answer doesn't have to be really swift. Bonus points if it is, though. -

What are the tools you have at your disposal? In other words: you say this is math homework. Is this for a calculus course, a pre-calculus course, a geometry course, a trigonometry course? The answer will depend on what tools you are expected to use. – Arturo Magidin Feb 7 '11 at 19:31 I'd say it's trigonometry, but I'm not 100% sure. We don't categorise courses like that here. – starship Feb 7 '11 at 19:38 What are the tools you are expected to use? Derivatives? Geometry? Knowledge about what sine and cosine look like? Trigonometric identities? Some or all of the above? Really: the answer depends entirely on what tools you have at your disposal. If you are expected to use calculus (derivatives), then you have one approach; if you don't know calculus, then the approach is different. – Arturo Magidin Feb 7 '11 at 19:40 Geometry. We didn't learn derivatives yet (we're doing that next year). And we're expected to know what sine and cosine look like. – starship Feb 7 '11 at 19:45 Then Eric's answer is almost certainly the intended answer: use some trigonometric identities to get the function you want into an easier form, and then do the graph of that easier form. You should know what the graph of $\sin(2x)$ is, based on the graph of $y=\sin x$, and then how to go from the graph of $y=\sin 2x$ to $y=\frac{1}{2}\sin 2x$. – Arturo Magidin Feb 7 '11 at 19:50

## 1 Answer

Here is a hint: Can you draw the graph of $\sin x$? What about $a\sin{(bx)}$? Then recall the identity $$\sin{x} \cos{x} = \frac{1}{2} \sin {2x}.$$ Maybe that helps.

Edit: This is just one way, there are many. As asked in the above comments, how are you supposed to solve it? What tools do you have at your disposal? What type of things have you been taught so far? -

Ah, that actually helped a lot. Lemme go try and draw it now, to see if it's alright. I'll let you know in a few minutes! ~Brb – starship Feb 7 '11 at 19:49 Oh, I could hug you. It helped so much! Thank you! <3 – starship Feb 7 '11 at 19:52
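Once it's in the form $\frac12\sin 2x$, a quick plot confirms the two expressions trace the same curve, a sine wave of amplitude $\frac12$ and period $\pi$. A small matplotlib sketch (mine, purely illustrative):

````
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2*np.pi, 2*np.pi, 1000)
plt.plot(x, np.sin(x)*np.cos(x), label='sin(x)cos(x)')
plt.plot(x, 0.5*np.sin(2*x), 'k--', label='(1/2)sin(2x)')  # lands on top
plt.axhline(0, color='gray', lw=0.5)
plt.legend()
plt.show()
````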
http://mathforum.org/mathimages/index.php?title=Solving_Triangles&diff=20913&oldid=20906
# Solving Triangles

*From Math Images*

**The Shadow Problem** (Field: Geometry; Created By: Orion Pictures; Website: http://en.wikipedia.org/wiki/File:Shadows_and_fog.jpg)

In the 1991 film Shadows and Fog, the eerie shadow of a larger-than-life figure appears against the wall as the shady figure lurks around the corner. How tall really is the ominous character? Filmmakers use the geometry of shadows and triangles to make this special effect. The shadow problem is a standard type of problem for teaching trigonometry and the geometry of triangles. In the standard shadow problem, several elements of a triangle will be given. The process by which the rest of the elements are found is referred to as solving a triangle.

# Basic Description

A triangle has six total elements: three sides and three angles. Sides are valued by length, and angles are valued by degree or radian measure. According to the postulates for congruent triangles, given three elements, the other three elements can be determined as long as at least one side length is given (up to the well-known ambiguity of the side-side-angle case). Math problems that involve solving triangles, like shadow problems, typically provide certain information about just a few of the elements of a triangle, so that a variety of methods can be used to solve the triangle.

Shadow problems normally have a particular format. Some light source, often the sun, shines down at a given angle of elevation. The angle of elevation is the acute angle measured by swinging up from the horizon. Assuming that the horizon is parallel to the surface on which the light is shining, the angle of elevation is always equal to the angle of depression. The angle of depression is the angle at which the light shines down, compared to the angle of elevation, which is the angle at which someone or something must look up to see the light source. Knowing the angle of elevation or depression can be helpful because trigonometry can be used to relate angles and side lengths.

In the typical shadow problem, the light shines down on an object or person of a given height. It casts a shadow on the ground below, so that the farthest tip of the shadow makes a straight line with the tallest point of the person or object and the light source.
The length from the tip of the shadow to the point on the surface where the object stands can be viewed as the first leg, or base, of the triangle, and the height of the object can be viewed as the second leg. In the simplest shadow problems, the triangle is a right triangle because the object stands perpendicular to the ground. In the picture below, the sun casts a shadow of the man. The length of the shadow is the base of the triangle, the height of the man is the height of the triangle, and the length from the tip of the shadow to the top of the man's head is the hypotenuse. The resulting triangle is a right triangle.

In another version of the shadow problem, the light source shines from the same surface on which the object or person stands. In this case the shadow is projected onto some wall or vertical surface, which is typically perpendicular to the first surface. In this situation, the line that connects the light source, the top of the object, and the tip of the shadow on the wall is the hypotenuse. The height of the triangle is the length of the shadow on the wall, and the distance from the light source to the base of the wall can be viewed as the other leg of the triangle. The picture below diagrams this type of shadow problem, and this page's main picture is an example of one of these types of shadows.

More difficult shadow problems will often involve a surface that is not level, like a hill. A person standing on a hill does not stand perpendicular to the surface of the ground, so the resulting triangle is not a right triangle. Other shadow problems may fix the light source at a given height, like on a street lamp. This scenario creates a set of two similar triangles. Ultimately, a shadow problem asks you to solve a triangle by providing only a few of the six possible elements. In some shadow problems, like the one that involves two similar triangles, information about one triangle may be given and the question may ask for elements of another.

# A More Mathematical Explanation

Note: understanding of this explanation requires: Trigonometry, Geometry

## Why Shadows?

Shadows are useful in the setup of triangle problems because of the way light works. A shadow is cast when light cannot shine through a solid surface. Light shines in a linear fashion, that is to say it does not bend: light waves travel forward in the same direction in which the light was shined. Light is not like a liquid; it does not fill the space in which it shines the way liquid assumes the shape of any container it's in.

In addition to the linear fashion in which light shines, light has certain angular properties. When light shines on an object that reflects light, it reflects back at the same angle at which it shined. Say a light shines onto a mirror mounted on a wall. The angle between the beam of light and the wall that the mirror is on is the angle of approach. The angle from the wall at which the light reflects off of the mirror is the angle of departure. The angle of approach is equal to the angle of departure. In another example, a cue ball is bounced off of the wall of a pool table at a certain angle. Just like the way that light bounces off of the mirror, the cue ball bounces off the wall at exactly the same angle at which it hits the wall.
The cue ball has the same properties as the beam of light in this case: the angle of departure is the same as the angle of approach. This property will help with certain types of triangle problems, particularly those that involve mirrors.

## More Than Just Shadows

Shadow problems are just one type of problem that involves solving triangles. There are numerous other formats and set-ups for unsolved triangle problems. Most of these problems are formatted as word problems, that is, they set up the problem in terms of some real-life scenario. There are, however, many problems that simply provide numbers that represent angles and side lengths. In this type of problem, angles are denoted with capital letters, $A, B, C, \ldots$, and the sides are denoted by lower-case letters, $a, b, c, \ldots$, where $a$ is the side opposite the angle $A$.

### Ladder Problems

One other common problem in solving triangles is the ladder problem. A ladder of a given length is leaned up against a wall that stands perpendicular to the ground. The ladder can be adjusted so that the top of the ladder sits higher or lower on the wall, and the angle that the ladder makes with the ground increases or decreases accordingly. Because the ground and the wall are perpendicular to one another, the triangles that need to be solved in ladder problems always have right angles. Since the right angle is always fixed, many ladder problems require the angle between the ground and the ladder, or the angle of elevation, to be somehow associated with the fixed length of the ladder and the height of the ladder on the wall. In other words, ladder problems normally deal with the SAS scenario: they involve the length from the wall to the base of the ladder, the fixed length of the ladder itself, and the enclosed angle of elevation to determine the height at which the ladder sits on the wall.

### Mirror Problems

Mirror problems are a specific type of triangle problem which involves two people or objects that stand looking into the same mirror. Because of the way a mirror works, light reflects back at the same angle at which it shines in, as explained above in Why Shadows?. In a mirror problem, the angle at which one person looks into the mirror, or the angle of vision, is the exact same angle at which the second person looks into the mirror. Typically, the angle at which one person looks into the mirror is given along with some other piece of information. Once that angle is known, one angle of the triangle is automatically known, since the light reflects back off of the mirror at the same angle, making the angle of the triangle next to the mirror the supplement of twice the angle of vision.

### Sight Problems

Like shadow problems, sight problems include many different scenarios and several forms of triangles. Most sight problems are set up as word problems. They involve a person standing below or above some other person or object. In most of these problems, a person measures an angle with a tool called an astrolabe or a protractor. In the most standard type of problem, a person uses the astrolabe to measure the angle at which he looks up or down at something. In the example at the right, the bear stands in a tower of a given height and uses the astrolabe to measure the angle at which he looks down at the forest fire. The problem asks to find how far away the forest fire is from the base of the tower given the previous information.
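For concreteness, here is one worked sight problem of the same shape as the bear-and-fire example; the tower height and angle below are illustrative numbers, not the ones from the original figure. Suppose the tower is 80 feet tall and the measured angle of depression is 25°. Since the angle of depression from the tower equals the angle of elevation from the fire, the distance $x$ from the base of the tower satisfies

$$\tan 25^\circ = \frac{80}{x}, \qquad x = \frac{80}{\tan 25^\circ} \approx \frac{80}{0.4663} \approx 172\ \text{ft}.$$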
## Ways to Solve Triangles

In all cases, a triangle problem will give only a few elements of a triangle and will ask for one or more of the unknown lengths or angle measures. There are numerous formulas, methods, and operations that can help to solve a triangle, depending on the information given in the problem.

The first step in any triangle problem is drawing a diagram. A picture can help to show which elements of the triangle are given and which elements are adjacent or opposite one another. By knowing where the elements are in relation to one another, we can use the trigonometric functions to relate angles and side lengths. There are numerous techniques which can be implemented in solving triangles:

• Trigonometry: The Basic Trigonometric Functions relate side lengths to angles. By substituting the appropriate values into the formulas for sine, cosine, or tangent, trigonometry can help to solve for a particular side length or angle measure. This is useful when given a side length and an angle measure.

• Pythagorean Theorem: The Pythagorean Theorem relates the squares of all three side lengths to one another in right triangles. This is useful when a triangle problem provides two side lengths and a third is needed.

$$a^{2}+b^{2} = c^{2}$$

• Law of Cosines: The Law of Cosines is a generalization of the Pythagorean Theorem which can be used for solving non-right triangles. The law of cosines relates the squares of the side lengths to the cosine of one of the angle measures. This is particularly useful given an SAS configuration, or when all three side lengths of a non-right triangle are known and no angles are.

$$c^{2} = a^{2} + b^{2} - 2ab \cos C$$

• Law of Sines: The Law of Sines is a formula that relates the sine of a given angle to the length of its opposite side. The law of sines is useful in any configuration when an angle measure and the length of its opposite side are given. It is also useful given an ASA configuration, and often the ASS configuration. The ASS configuration is known as The Ambiguous Case, since it does not always provide one definite solution to the triangle.

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$

When solving a triangle, one side length must always be given in the problem. Given an AAA configuration, there is no definite solution to the triangle: according to the postulates for congruent triangles, the AAA configuration proves similarity of triangles, but it gives no way to determine the side lengths.

## Example Triangle Problems

### Example 1: Using Trigonometry

A damsel in distress awaits her rescue from the tallest tower of the castle. A brave knight is on the way. He can see the castle in the distance and starts to plan his rescue, but he needs to know the height of the tower so he can plan properly. The knight sits on his horse 500 feet away from the castle. He uses his handy protractor to find the measure of the angle at which he looks up to see the princess in the tower, which is 15°. Sitting on the horse, the knight's eye level is 8 feet above the ground. What is the height of the tower?

We can use tangent to solve this problem. For a more in-depth look at tangent, see Basic Trigonometric Functions. Use the definition of tangent:

$$\tan \theta =\frac{\text{opposite}}{\text{adjacent}}$$

Plug in the angle and the known side length.
$$\tan 15^\circ =\frac{x\ \text{ft}}{500\ \text{ft}}$$

Clearing the fraction gives us

$$\tan 15^\circ \,(500) = x$$

Evaluating the tangent,

$$(0.26795)(500) = x$$

Rounding, we get

$$x \approx 134\ \text{ft}$$

But this is only the height of the triangle, not the height of the tower. We need to add 8 ft to account for the height between the ground and the knight's eye level, which served as the base of the triangle:

$$134\ \text{ft} + 8\ \text{ft} = h, \qquad h = 142\ \text{ft}$$

The tower is approximately 142 feet tall.

### Example 2: Using the Law of Sines

A man stands 100 feet above the sea on top of a cliff. The captain of a white-sailed ship looks up at a 45° angle to see the man, and the captain of a black-sailed ship looks up at a 30° angle to see him. How far apart are the two ships?

To solve this problem, we can use the law of sines to find the bases of the two triangles, since we have an AAS configuration with a known right angle. The distance between the two ships is then the difference between the two base lengths. First, we need to find the third angle for each of the triangles; then we can use the law of sines.

For the white-sailed ship,

$$180^\circ - 90^\circ - 45^\circ = 45^\circ$$

Let the distance between this boat and the cliff be denoted by $a$. By the law of sines,

$$\frac{100}{\sin 45^\circ} = \frac{a}{\sin 45^\circ}$$

Multiplying both sides by $\sin 45^\circ$ gives us

$$a = 100\ \text{ft}$$

For the black-sailed ship,

$$180^\circ - 90^\circ - 30^\circ = 60^\circ$$

Let the distance between this boat and the cliff be denoted by $b$. By the law of sines,

$$\frac{100}{\sin 30^\circ} = \frac{b}{\sin 60^\circ}$$

Clearing the fractions,

$$100\,\sin 60^\circ = b\,\sin 30^\circ$$

Computing the sines of the angles gives us

$$100\cdot\frac{\sqrt{3}}{2} = b\cdot\frac{1}{2}$$

so that

$$b = 100\sqrt{3} \approx 173\ \text{ft}$$

The distance between the two boats, $x$, is the positive difference between the lengths of the bases of the triangles:

$$x = b - a = 173 - 100 = 73\ \text{ft}$$

The boats are about 73 feet apart from one another.

### Example 3: Using Multiple Methods

At the park one afternoon, a tree casts a shadow on the lawn. A man stands at the edge of the shadow and wants to know the angle at which the sun shines down on the tree. If the tree is 51 feet tall and he stands 68 feet away from the tree, what is the angle of elevation?

There are several ways to solve this problem. The following solution uses a combination of the methods described above. First, we can use the Pythagorean theorem to find the length of the hypotenuse of the triangle, from the tip of the shadow to the top of the tree:

$$a^{2}+b^{2} = c^{2}$$

Substituting the lengths of the legs of the triangle for $a, b$,

$$51^{2}+68^{2} = c^{2}$$

Simplifying gives us

$$2601+4624 = c^{2}, \qquad 7225 = c^{2}$$

Taking the square root of both sides,

$$c = \sqrt{7225} = 85$$

Next, we can use the law of cosines to find the measure of the angle of elevation:

$$a^{2}=b^{2}+c^{2} - 2bc \cos A$$

Plugging in the appropriate values gives us

$$51^{2}=68^{2}+85^{2} - 2(68)(85) \cos A$$

Computing the squares gives us

$$2601= 4624+7225 - 11560 \cos A, \qquad 2601= 11849 - 11560 \cos A$$

Subtracting $11849$ from both sides,

$$-9248= -11560 \cos A, \qquad \cos A = 0.8$$

Using inverse trigonometry to find the angle of elevation,

$$A \approx 37^\circ$$
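As a quick check on the arithmetic in the three examples above, here is a short Python sketch; it recomputes each answer from the given data, with the same rounding as the text:

```python
import math

# Example 1: tower seen at 15 degrees from 500 ft away, plus 8 ft of eye level.
print(round(math.tan(math.radians(15)) * 500 + 8))   # 142 (ft)

# Example 2: ships seen from a 100 ft cliff at 45 and 30 degrees of elevation.
a = 100 / math.tan(math.radians(45))                 # white-sailed ship: 100 ft
b = 100 / math.tan(math.radians(30))                 # black-sailed ship: ~173 ft
print(round(b - a))                                  # 73 (ft)

# Example 3: angle of elevation for a 51 ft tree and a 68 ft shadow.
c = math.hypot(51, 68)                               # hypotenuse: 85 ft
A = math.acos((68**2 + c**2 - 51**2) / (2 * 68 * c)) # law of cosines
print(round(math.degrees(A)))                        # 37 (degrees)
```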
# Why It's Interesting

Shadow problems are one of the most common types of problem used in teaching trigonometry. The paradigm set up by a shadow problem is simple, visual, and easy to remember. Though an easy way to learn trigonometry, shadow problems are also highly applicable: shadows, while an effective device in a word problem, can be genuinely useful in real life. In this section, we use a real-life example of using shadows and triangles to calculate heights and distances.

## Example: Sizing Up Swarthmore

The Clothier Bell Tower is the tallest building on Swarthmore College's campus, yet few people know exactly how tall the tower stands. We can use shadows to determine the height of the tower. Here's how:

Step 1) Mark the shadow of the tower. Make sure to mark the time of day: the sun is at different heights throughout the day, and shadows are longest early in the morning and late in the afternoon. Around midday the shadows aren't very long, so it might be harder to find a good shadow. When we marked the shadow of the bell tower, it was around 3:40 pm in mid-June.

Step 2) After marking the shadow, measure the distance from the mark to the bottom of the tower. This length will serve as the base of our triangle. In this case, the length of the shadow was 111 feet.

Step 3) Measure the angle of the sun at that time of day. Use a yardstick to make a smaller, more manageable triangle. Because the sun shines down at the same angle as it does on the bell tower, the small triangle and the bell tower's triangle are similar and therefore have the same trigonometric ratios.

• Stand the yardstick perpendicular to the ground so that it forms a right angle. The sun will cast a shadow; mark the end of the shadow with a piece of chalk.

• Measure the length of the shadow. This will be the length of the base of the small triangle.

• Draw a diagram of the triangle made by connecting the top of the yardstick to the marked tip of the shadow. Use inverse trigonometry to determine the angle of elevation:

$$\tan X = \frac{36\ \text{in}}{27\ \text{in}}, \qquad X = \arctan \frac{4}{3} \approx 53^\circ$$

Step 4) Now we can use trigonometry to solve the triangle for the height of the bell tower:

$$\tan 53^\circ = \frac{h}{111\ \text{ft}}$$

Clearing the fractions,

$$h = 111 \,\tan 53^\circ$$

Plugging in the value of $\tan 53^\circ$ (which is $\tfrac{4}{3}$, from the similar yardstick triangle) gives us

$$h = 111 \cdot \frac{4}{3} \approx 148\ \text{ft}$$

According to our calculations, the height of the Clothier Bell Tower is 148 feet.

## History: Eratosthenes and the Earth

In ancient Greece, the mathematician Eratosthenes made a name for himself in the history books by calculating the circumference of the Earth using shadows.
http://mathoverflow.net/questions/43833/a-markov-process-which-is-not-a-strong-markov-process/43841
## A Markov process which is not a strong Markov process?

Can anyone give an example of a Markov process which is not a strong Markov process? The Markov property and strong Markov property are typically introduced as distinct concepts (for example in Oksendal's book on stochastic analysis), but I've never seen a process which satisfies one but not the other. Many thanks -Simon

-

## 4 Answers

Consider the following continuous Markov process $X$, starting from position $x$:

1. if $x = 0$ then $X_t = 0$ for all times;
2. if $x \neq 0$ then $X$ is a standard Brownian motion starting from $x$.

This is not strong Markov (look at times at which it hits zero).

-

Let $X(t) = f(W(t) + \pi)$, where $W(t)$ is a standard Wiener process and

$$f(x) = \begin{cases} (x,0), & x\leq 0 \\ \\ (\sin x,1-\cos x), & 0 < x < 2\pi \\ \\ (x-2\pi,0), & x\geq 2\pi \end{cases}$$

is a map from $\mathbb R$ to $\mathbb R^2$. $X(t)$ is an $\mathbb R^2$-valued Markov process on $\mathbb R_+$ which is not strongly Markovian. See "A Modern Approach to Probability Theory" by Fristedt and Gray (1997, pp. 626–627). If the time set is discrete, the ordinary Markov property implies the strong Markov property.

-

1 A little complicated to follow, but quite neat. I see it works because the curve given by $f$ intersects itself so, if you stop at an intersection point, you don't know which part of the curve the process is currently moving on. – George Lowther Oct 27 2010 at 17:29

A standard example is Exercise 6.17 in Sharpe's book The General Theory of Markov Processes. The process stays at zero for an exponential amount of time, then moves to the right at a uniform speed.

-

Ah, this is quite a simple example. It fails the strong Markov property, but not as badly as Andrey's and my example. This one still satisfies a restricted version of the strong Markov property, where you only look at predictable stopping times. It is "moderate Markov" (books.google.co.uk/…). – George Lowther Oct 27 2010 at 17:51

2 I can't resist giving this quote from Kai Lai Chung's "Lectures from Markov Processes to Brownian Motion (1982)" -- It may be difficult for the novice to appreciate the fact that twenty five years ago a formal proof of the strong Markov property was a major event. Who now is interested in an example in which it does not hold? – Byron Schmuland Oct 28 2010 at 5:02

I did not quite get the first answer (the one using Brownian motion). If the process starts at $x$ (not equal to 0), the distribution of $X(0)$ is $\delta_x$ and the transition kernels are those of Brownian motion; and if $x = 0$ then the distribution of $X(0)$ is $\delta_0$ and the transition kernels are those of a constant stochastic process. How do we mix the two processes? Sorry if I am missing something silly.

-

@vinay For $x\ne0$ and $t>0$, let $p_t(x,\cdot)$ denote the Gaussian distribution with mean $x$ and variance $t$ and $p_t(0,\cdot)$ denote the Dirac measure at $0$. For every $x$, let $p_0(x,\cdot)$ denote the Dirac measure at $x$. Then, for every bounded measurable $\varphi$, initial distribution $\nu$ and times $0=t_0\le t_1\le \cdots\le t_n$, $E_\nu[\varphi(X(t_0),X(t_1),\ldots,X(t_n))]$ is the integral you know, involving $\varphi$, $\nu$ and the semi-group $(p_t)_{t\ge0}$. QED.
In fact, a good way to understand this example is to try to prove that $(p_t)_{t\ge0}$ is indeed a semi-group. – Didier Piau Feb 22 2011 at 12:34
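Taking up the suggestion in the last comment, here is a sketch of the verification (my own addition, not part of the original thread). The Chapman–Kolmogorov identity $\int p_t(x,dy)\,p_s(y,A) = p_{t+s}(x,A)$ holds for this family: for $x \neq 0$ and $t, s > 0$, the measure $p_t(x,\cdot)$ is the Gaussian $N(x,t)$, which gives zero mass to the single point $y = 0$, so the Dirac exception at $0$ is invisible to the integral and

$$\int_{\mathbb R} p_t(x,dy)\, p_s(y,A) = \big(N(x,t) * N(0,s)\big)(A) = N(x,t+s)(A) = p_{t+s}(x,A),$$

since convolving Gaussians adds their variances. For $x = 0$, both sides reduce to $\delta_0(A)$, and the cases $t = 0$ or $s = 0$ are trivial. So $(p_t)_{t\ge 0}$ really is a Markov semigroup, even though the associated process fails the strong Markov property at the hitting time of $0$.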
http://math.stackexchange.com/questions/143584/related-rates-with-a-cylinder
# Related rates with a cylinder

A cylindrical tank with radius 5 m is being filled with water at a rate of $3\ \text{m}^3/\text{min}$. How fast is the height of the water increasing?

I know that

$$dv/dt = 3$$

And that

$$v= \pi r^2 h$$

So I want to find the derivative of $h$:

$$h= \frac {v}{ \pi r^2}$$

$$h' = \frac{3 \pi \cdot 25}{ (\pi \cdot 25)^2 }$$

$h' \cdot v'$

This gives an incorrect answer and I am not sure why.

-

1 We have as you said $h=\frac{v}{\pi r^2}$. Note that $r$ is a constant. Differentiate with respect to time. We get $\frac{dh}{dt}=\frac{1}{\pi r^2}\frac{dv}{dt}$. – André Nicolas May 10 '12 at 18:04

## 2 Answers

You want $\dfrac{dh}{dt}$; by the chain rule this is $\dfrac{dh}{dv}\dfrac{dv}{dt}$. You have $h=\dfrac{v}{\pi r^2}=\dfrac1{\pi r^2}v$, where $\dfrac1{\pi r^2}$ is a constant, so $\dfrac{dh}{dv}=\dfrac1{\pi r^2}$; you don't need the quotient rule for this differentiation. Finally, you have $\dfrac{dv}{dt}=3$, so

$$\frac{dh}{dt}=\frac{dh}{dv}\frac{dv}{dt}=\frac3{\pi r^2}\text{ m/min}\;.$$

Since $r=5$ m, the actual rate is $\dfrac3{25\pi}$ m/min.

In a problem like this it's a good idea to use the $\dfrac{dv}{dt}$ notation instead of the $v'$ notation, because you're taking derivatives with respect to more than one variable, and you need to keep track of what that variable is in each calculation.

-

Hint: $\displaystyle h'(t) = \frac{v'(t)}{\pi r^2}$, since $r$ is constant.

-
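As a sketch of how one might check this symbolically (assuming SymPy is available; the variable names are mine, not from the thread):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r = 5                                  # radius of the tank, in metres
v = sp.Function('v')(t)                # volume as a function of time
h = v / (sp.pi * r**2)                 # h = v / (pi r^2), with r constant

dh_dt = sp.diff(h, t).subs(sp.Derivative(v, t), 3)  # dv/dt = 3 m^3/min
print(dh_dt)          # 3/(25*pi)
print(float(dh_dt))   # about 0.038 m/min
```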
http://math.stackexchange.com/questions/221028/permutations-and-the-binomial-coefficient
# Permutations and the binomial coefficient

I have seen several times the use of "n choose k" on the left side of the permutations formula. However, this expression is usually said to be used with combinations. Not that this changes when or how "permutations" or "subsets" are used according to the context. But I wonder why the binomial coefficient notation is used in a permutations context. Thanks.

Permutation: $$\binom{n}{k}=\frac{n!}{(n-k)!}$$

Combination: $$\binom{n}{k}=\frac{n!}{k!\,(n-k)!}$$

-

## 1 Answer

For non-negative integers $n$ and $k$ the binomial coefficient $\binom{n}k$ is the number of $k$-sized subsets of a set of $n$ things: it's the number of different combinations of $k$ of the $n$ elements. The members of a $k$-sized set can be listed in $k!$ different orders, so there are altogether $k!\binom{n}k$ permutations of $k$-sized subsets of the original set of $n$ things, $k!$ orderings of each of $\binom{n}k$ sets. But $\dbinom{n}k=\dfrac{n!}{k!(n-k)!}$, so

$$k!\binom{n}k=k!\cdot\frac{n!}{k!(n-k)!}=\frac{n!}{(n-k)!}=n(n-1)(n-2)\dots(n-k+1)\;,$$

and you often see the formula for the number of permutations of $k$-sized subsets of an $n$-set expressed in one of these last two ways instead of as $k!\binom{n}k$. In some ways these are simpler than the form $k!\binom{n}k$. However, because the binomial coefficients turn out to be rather easy to work with, and because many relationships involving them are known, it's often convenient to write the number of permutations as $k!\binom{n}k$ in order to take advantage of those known relationships.

-

Thank you Brian! Nice explanation. – Diego Oct 25 '12 at 19:10

@Diego: Thanks! You're welcome. – Brian M. Scott Oct 25 '12 at 19:15
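A quick numerical illustration of the identity in the answer, using Python's standard library (`math.perm` and `math.comb` exist in Python 3.8+):

```python
import math

n, k = 10, 4

perms = math.perm(n, k)          # n! / (n-k)!        -> 5040
combs = math.comb(n, k)          # n! / (k! (n-k)!)   -> 210

# Each k-subset can be ordered in k! ways, so P(n, k) = k! * C(n, k).
assert perms == math.factorial(k) * combs
print(perms, combs)              # 5040 210
```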
http://en.wikipedia.org/wiki/Grashof_number
# Grashof number

The Grashof number $\mathrm{Gr}$ is a dimensionless number in fluid dynamics and heat transfer which approximates the ratio of the buoyancy to viscous force acting on a fluid. It frequently arises in the study of situations involving natural convection. It is named after the German engineer Franz Grashof.

$$\mathrm{Gr}_L = \frac{g \beta (T_s - T_\infty ) L^3}{\nu ^2}\quad \text{for vertical flat plates}$$

$$\mathrm{Gr}_D = \frac{g \beta (T_s - T_\infty ) D^3}{\nu ^2}\quad \text{for pipes}$$

$$\mathrm{Gr}_D = \frac{g \beta (T_s - T_\infty ) D^3}{\nu ^2}\quad \text{for bluff bodies}$$

where the $L$ and $D$ subscripts indicate the length scale basis for the Grashof number, and

• $g$ = acceleration due to Earth's gravity
• $\beta$ = volumetric thermal expansion coefficient (equal to approximately $1/T$ for ideal fluids, where $T$ is absolute temperature)
• $T_s$ = surface temperature
• $T_\infty$ = bulk temperature
• $L$ = length
• $D$ = diameter
• $\nu$ = kinematic viscosity

The transition to turbulent flow occurs in the range $10^8 < \mathrm{Gr}_L < 10^9$ for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, the boundary layer is laminar. The product of the Grashof number and the Prandtl number gives the Rayleigh number, a dimensionless number that characterizes convection problems in heat transfer.

There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems:

$$\mathrm{Gr}_c = \frac{g \beta^* (C_{a,s} - C_{a,a} ) L^3}{\nu^2}$$

where

$$\beta^* = -\frac{1}{\rho} \left ( \frac{\partial \rho}{\partial C_a} \right )_{T,p}$$

and

• $g$ = acceleration due to Earth's gravity
• $C_{a,s}$ = concentration of species $a$ at the surface
• $C_{a,a}$ = concentration of species $a$ in the ambient medium
• $L$ = characteristic length
• $\nu$ = kinematic viscosity
• $\rho$ = fluid density
• $C_a$ = concentration of species $a$
• $T$ = constant temperature
• $p$ = constant pressure

## Derivation of Grashof Number

The first step in deriving the Grashof number $\mathrm{Gr}$ is manipulating the volume expansion coefficient $\beta$ as follows:

$$\beta=\frac{1}{v}\left(\frac{\partial v}{\partial T}\right)_p =\frac{-1}{\rho}\left(\frac{\partial\rho}{\partial T}\right)_p$$

Here $v$ is the specific volume. This relation between the volume expansion coefficient $\beta$ and the fluid density $\rho$ at constant pressure can be rewritten as

$$\rho=\rho_o (1-\beta \Delta T)$$

where

• $\rho_o$ = bulk fluid density
• $\rho$ = boundary layer density
• $\Delta T=(T-T_o)$ = temperature difference between boundary layer and bulk fluid

There are two different ways to find the Grashof number from this point. One involves the energy equation, while the other incorporates the buoyant force due to the difference in density between the boundary layer and the bulk fluid.

### Energy Equation

This discussion involving the energy equation is with respect to rotationally symmetric flow. This analysis will take into consideration the effect of gravitational acceleration on flow and heat transfer. The mathematical equations to follow apply both to rotationally symmetric flow and to two-dimensional planar flow.
$$\frac{\partial}{\partial s}(\rho u r_o^{n})+\frac{\partial}{\partial y}(\rho \nu r_o^{n})=0$$

where

• $s$ = rotational direction
• $u$ = tangential velocity
• $y$ = planar direction
• $\nu$ = normal velocity
• $r_o$ = radius

This equation expands to the following with the addition of physical fluid properties:

$$\rho\left(u \frac{\partial u}{\partial s}+\nu \frac{\partial u}{\partial y}\right)=\frac{\partial}{\partial y}\left(\mu \frac{\partial u}{\partial y}\right)-\frac{d p}{d s}+\rho g$$

In these equations the superscript $n$ distinguishes rotationally symmetric flow from planar flow:

• $n=1$: rotationally symmetric flow
• $n=0$: planar, two-dimensional flow
• $g$ = gravitational acceleration

From here we can further simplify the momentum equation by setting the bulk fluid velocity to zero, $u = 0$, which gives

$$\frac{d p}{d s}=\rho_o g$$

This relation shows that the pressure gradient is simply the product of the bulk fluid density and the gravitational acceleration. The next step is to plug this pressure gradient into the momentum equation, using the volume expansion coefficient–density relationship

$$\rho_o-\rho=\beta \rho(T-T_o)$$

found above for the buoyancy term:

$$\rho\left(u \frac{\partial u}{\partial s}+\nu \frac{\partial u}{\partial y}\right)=\mu \left(\frac{\partial^2 u}{\partial y^2}\right)+\rho g \beta (T-T_o)$$

Dividing through by $\rho$ then gives

$$u\frac{\partial u}{\partial s}+\nu \frac{\partial u}{\partial y}=\nu \left(\frac{\partial^2 u}{\partial y^2}\right)+g \beta(T-T_o)$$

(note that on the right-hand side $\nu = \mu/\rho$ denotes the kinematic viscosity, not the normal velocity).

To find the Grashof number from this point, the preceding equation must be non-dimensionalized: every variable in the equation should have no dimension. This is done by dividing each variable by a corresponding constant quantity. Lengths are divided by a characteristic length $L_c$. Velocities are divided by an appropriate reference velocity $V$, which, considering the Reynolds number, gives $V=\frac{Re_L \nu}{L_c}$. Temperatures are divided by the appropriate temperature difference $(T_s-T_o)$. These dimensionless parameters look like the following:

$$s^*=\frac{s}{L_c},\quad y^*=\frac{y}{L_c},\quad u^*=\frac{u}{V},\quad \nu^*=\frac{\nu}{V},\quad T^*=\frac{T-T_o}{T_s-T_o}.$$

The asterisks represent dimensionless parameters. Combining these dimensionless variables with the momentum equation gives the following simplified equation:

$$u^* \frac{\partial u^*}{\partial s^*}+\nu^* \frac{\partial u^*}{\partial y^*}=\left [\frac{g \beta(T_s-T_o)L_c^{3}}{\nu^2} \right ] \frac{T^*}{Re_L^{2}}+\frac{1}{Re_L} \frac{\partial^2 u^*}{\partial y^{*2}}$$

where

• $T_s$ = surface temperature
• $T_o$ = bulk fluid temperature
• $L_c$ = characteristic length

The dimensionless parameter enclosed in the brackets in the preceding equation is known as the Grashof number:

$$\mathrm{Gr}=\frac{g \beta(T_s-T_o)L_c^{3}}{\nu^2}$$

### Buckingham Pi Theorem

Another form of dimensional analysis that results in the Grashof number is the Buckingham Pi theorem. This method takes into account the buoyancy force per unit volume $F_b$ due to the density difference between the boundary layer and the bulk fluid.
$$F_b=(\rho_o-\rho)g$$

Using the density relationship above, this equation can be rewritten as

$$F_b=\beta g \rho_o \Delta T$$

The variables used in the Buckingham Pi method are listed below, along with their symbols and dimensions.

| Variable | Symbol | Dimensions |
|---|---|---|
| Significant length | $L$ | $L$ |
| Fluid viscosity | $\mu$ | $\frac{M}{L t}$ |
| Fluid heat capacity | $c_p$ | $\frac{Q}{M T}$ |
| Fluid thermal conductivity | $k$ | $\frac{Q}{L t T}$ |
| Fluid density | $\rho$ | $\frac{M}{L^3}$ |
| Volume expansion coefficient | $\beta$ | $\frac{1}{T}$ |
| Gravitational acceleration | $g$ | $\frac{L}{t^2}$ |
| Temperature difference | $\Delta T$ | $T$ |
| Heat transfer coefficient | $h$ | $\frac{Q}{L^2 t T}$ |

With reference to the Buckingham Pi theorem, there are $9-5=4$ dimensionless groups. Choose $L$, $\mu$, $k$, $\beta$ and $g$ as the reference variables. Then the $\pi$ groups are as follows:

$$\pi_1=L^a \mu^b k^c \beta^d g^e c_p,\quad \pi_2=L^f \mu^g k^h \beta^i g^j \rho,\quad \pi_3=L^k \mu^l k^m \beta^n g^o \Delta T,\quad \pi_4=L^q \mu^r k^s \beta^t g^u h.$$

Solving these $\pi$ groups gives:

$$\pi_1=\frac{\mu c_p}{k}=Pr,\quad \pi_2=\frac{L^3 g \rho^2}{\mu^2},\quad \pi_3=\beta \Delta T,\quad \pi_4=\frac{h L}{k}=Nu$$

From the two groups $\pi_2$ and $\pi_3$, the product forms the Grashof number:

$$\pi_2 \pi_3=\frac{\beta g \rho^2 \Delta T L^3}{\mu^2}=Gr$$

Taking $\nu=\frac{\mu}{\rho}$ and $\Delta T=(T_s-T_o)$, the preceding equation can be rendered as the same result obtained from the energy equation:

$$\mathrm{Gr}=\frac{\beta g (T_s-T_o) L^3}{\nu^2}$$

In forced convection the Reynolds number governs the fluid flow, but in natural convection the Grashof number is the dimensionless parameter that governs the fluid flow. Using the energy equation and the buoyant force combined with dimensional analysis thus provides two different ways to derive the Grashof number.
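As an illustration of the working formula, here is a minimal Python sketch for a vertical flat plate. The air properties below are rough, illustrative values (approximate textbook data for air near 310 K), not precise property data:

```python
def grashof(g, beta, t_s, t_inf, length, nu):
    """Grashof number Gr = g * beta * (Ts - Tinf) * L^3 / nu^2."""
    return g * beta * (t_s - t_inf) * length**3 / nu**2

g = 9.81                     # gravitational acceleration, m/s^2
t_s, t_inf = 320.0, 300.0    # surface and bulk temperatures, K
beta = 1.0 / 310.0           # ~1/T_film for an ideal gas, 1/K
nu = 1.7e-5                  # kinematic viscosity of air, m^2/s (approx.)
L = 0.5                      # plate height, m

print(f"Gr = {grashof(g, beta, t_s, t_inf, L, nu):.2e}")
# Gr ~ 2.7e8, which falls inside the 1e8-1e9 transition range quoted above
```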
http://mathoverflow.net/questions/14509/various-concepts-of-closure-or-completion-in-mathematics/14511
## Various concepts of "closure" or "completion" in mathematics

Out of idle curiosity, I'm wondering about all the various idempotent constructions we have in mathematics (they seem to be generally referred to as a "closure" or "completion"), and how some of them are related (e.g., the radical of an ideal and the closure of a subset of $k^n$ in the Zariski topology, via the Nullstellensatz - the radical and the topological closure both being idempotent). So, one answer per post, but if you have two concepts which are related, I guess it'd be okay to put them together. For the sake of the completeness (ha ha) of this list, I'll add "radical" and "topological closure".

EDIT: My bad - I should have looked around more first. There's this list at Wikipedia and this list at nLab. Well, I'm sure there's plenty more concepts out there, so if you think of any more, feel free to add them. But let's focus on how some of these concepts are related - e.g., does one kind of completion arise in terms of another? What are some general ways in which completions and closures arise?

-

en.wikipedia.org/wiki/… – Martin Brandenburg Feb 7 2010 at 17:29

Yes, I'm realizing that there are some good lists already out there. I'll change the focus of the question. – Zev Chonoles Feb 7 2010 at 17:34

Order completion is necessary to even define metric space completion since one should construct R as the order-completion of Q. So I guess one should think of metric space completion as induced from order completion. – Qiaochu Yuan Feb 7 2010 at 17:37

Well, I guess that example isn't the best, but that's the kind of thing I'm thinking of - explaining one kind of completion in terms of, or induced by, another. – Zev Chonoles Feb 7 2010 at 17:41

1 Anyway, I think a more interesting question to ask would be something like "what are general ways in which closures and completions arise?" – Qiaochu Yuan Feb 7 2010 at 17:41

## 11 Answers

In general, if $A\subset B$ is a full reflective subcategory, then each object $a\in A$ is isomorphic to its image under the reflector. This seems to include many cases: $\mathbf{Ab}\subset \mathbf{Grp}$, $\mathbf{CompMet}\subset\mathbf{Met}$, $\mathbf{Top}_{n+1}\subseteq \mathbf{Top}_{n}$ (as in http://mathoverflow.net/questions/9504/why-is-top4-a-reflective-subcategory-of-top3), etc.

EDIT: Following Pete L. Clark's comment, here is a clarification: The subcategory $A$ above is called reflective if the inclusion functor $A\subset B$ has a left adjoint, and full if this inclusion functor is full. In case $A$ is a reflective subcategory, the left adjoint to the inclusion functor is called a reflector.

-

1 This is the point, I think. (Though it might be worth restating it in more broadly familiar categorical language: e.g. adjoint to the forgetful functor.) The listing of examples ad infinitum is not as insightful or useful as this one observation. – Pete L. Clark Feb 7 2010 at 22:22

@PLC: Thank you very much for your comment. I have edited the response accordingly for clarification. – unknown (google) Feb 7 2010 at 22:43

The coolest closure operation I know occurs in Razborov's lower bound for the monotone circuit complexity of the clique function.
In that proof he needs a class of "simple" set systems, and to get it he defines a very ingenious k-ary operation on sets and defines a simple set system to be one that is closed under that operation. I don't know what general moral to draw from that, but it's fairly different from the examples mentioned so far.

-

Just about every form of compactification. The compactification of a compact space is itself, and a compactification had better be compact or it shouldn't be called a compactification. The same thing goes for completion of metric spaces, of course. (I know, shouldn't post two answers in one for this kind of list, but they're closely related and trivially known for everybody. I put them here for completeness' sake (no pun intended).)

-

Ah! You edited your question! I had to delete my answer. Anyway, here is a general scenario where idempotent operations such as the one you want arise. In this paragraph I am going to be vague, but the examples given below should illustrate what I have in mind. You have "some structure" somewhere. You want to pass to the "maximal" such thing. You have a natural ordering on the structures you want, and it also happens that the union of a chain of such structures is again such a structure. Then you apply Zorn's lemma to find the maximal thing. This operation of going and finding the maximal thing is an "idempotent completion" in your sense. There are plenty of examples. A few:

1. A set of linearly independent vectors in a vector space is enlarged to a basis.

2. An algebraic extension of a field is enlarged to the algebraic closure.

3. A separable extension of a field is enlarged to the separable closure.

4. A differentiable atlas on a smooth manifold is enlarged to a maximal one, i.e., a differentiable structure.

5. A functional on a subspace of a Banach space is extended to the whole space, as in the proof of the Hahn–Banach theorem.

And so on: in fact nearly every application of Zorn's lemma. This is not a functorial way to go, but the construction as an operation is idempotent. And in some cases, such as in the construction of the algebraic closure, we have an isomorphism of any two such constructions.

-

That's a very good point about Zorn's-Lemma-constructed objects! I never thought about it that way before. – Zev Chonoles Feb 7 2010 at 22:18

Well, all I said here was that a maximal thing is indeed a maximal thing. :-). – Feb7 Feb 7 2010 at 22:26

The radical of an ideal - we have $\sqrt{\sqrt{I}}=\sqrt{I}$.

-

The closure of a set in a topology - we have $\overline{\bar{X}}=\bar{X}$.

-

The total ring of fractions for a ring - we have $Q(Q(R))\simeq Q(R)$.

-

Projection operators on a linear space are precisely the idempotents. All these other examples are somewhat like linear projections in that they are projecting from a category to a subcategory.

-

In model theory we have Skolem hulls: assuming that each formula has a corresponding Skolem function, one can take a given subset A of a model M and close it under Skolem functions. This closure Sk(A) is the smallest elementary submodel of M containing A.

-

Look at Kuratowski closures, for instance, or abstract closures.

-

A convex hull is also a form of closure.

-
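Several of the answers above are instances of one computational pattern: repeatedly apply the generating operations until nothing new appears. Here is a minimal Python sketch of that fixed-point construction (the function names are mine); idempotence is then automatic, since a closed set gains nothing from another pass:

```python
def close(seed, ops):
    """Smallest superset of `seed` closed under the binary operations `ops`.
    Terminates whenever the closure is finite."""
    closed = set(seed)
    frontier = set(seed)
    while frontier:
        new = set()
        for op in ops:
            for a in closed:
                for b in frontier:
                    for x in (op(a, b), op(b, a)):
                        if x not in closed:
                            new.add(x)
        closed |= new
        frontier = new
    return closed

add_mod_12 = lambda a, b: (a + b) % 12
S = close({3}, [add_mod_12])        # sub(semi)group of Z/12 generated by 3
print(sorted(S))                    # [0, 3, 6, 9]
print(close(S, [add_mod_12]) == S)  # True: the closure operation is idempotent
```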
http://unapologetic.wordpress.com/2008/08/12/additive-functions/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician

## Additive Functions

I've got internet access again, but I'm busy with a few things today, like assembling my furniture. Luckily, Tim Gowers has a post on "How to use Zorn's lemma". His example is the construction of additive (not linear) functions from $\mathbb{R}$ to itself. In practice, as he points out, this is equivalent to defining linear functions (not just additive) from $\mathbb{R}$ to itself, if we're considering $\mathbb{R}$ as a vector space over the field $\mathbb{Q}$ of rational numbers! This is a nifty little application within the sort of stuff we've been doing lately, and it really explains how we used Zorn's Lemma when we showed that every vector space has a basis.

Posted by John Armstrong | Algebra, Linear Algebra

## 9 Comments »

1. Re: earlier pi nonsense

I don't mean to offend your mathematical sensibilities. I was asking: doesn't something have to be falsifiable to be knowledge? And don't you need some way to measure things to falsify them? I just thought Pi would be that unit. I guess it can be any arbitrary constant. I was just trying to think of something independent of the number system itself, which is how I ended up with Pi. I wasn't trying to make an assertion, but ask a question. But I guess I was expecting not just a yes or no, but also some alternative explanation for the basis of knowledge. I meant it as a thought experiment, sorry if it caused too much exasperation. Even worse than philosophy, I come from economics, where everything is wrong and concept is more important than accuracy. Again, sorry for the heckling from the peanut gallery, I get carried away sometimes.

Comment by Kurt Osis | August 13, 2008 | Reply

2. $\pi$ is far from independent of the number system. It is what it is exactly because of the way the real number system is structured.

Comment by | August 13, 2008 | Reply

3. I guess that is over my head. The relationship is dependent on the number system? I always thought, you know, if aliens imagined a circle regardless of their number system they would still have pi? if they don't understand circles they're going to have real trouble reading the plaque we put on Pioneer then. Ok well, i've given up, i've concluded knowledge doesn't exist.

Comment by Kurt Osis | August 13, 2008 | Reply

4. Kurt's error is based on the thought that Pi is only about circles. Historically, that is how it was found millennia ago, by empirical measurement of round objects in physical space. Then abstracted by Archimedes as the limit of a series of circle approximations as polygons. Then as an abstract mathematical object which emerges from hundreds or thousands of different mathematical processes. His question is not stupid, merely naive, and can be cured by more reading and thought and working out of examples. At the deeper level, it is a philosophical category error to confuse "truth" and "proof" in the empirical world (science, engineering, technology) with "truth" and "proof" in the axiomatic world. Let alone with "truth" and "proof" in the aesthetic, politico-legal, or revealed/religious magisteria. I do not denigrate naive questions. Great thinkers such as Feynman and Einstein excelled at asking them, and finding novel answers.

Comment by | August 14, 2008 | Reply

5. JVP: Kurt is trying to drag a bunch of pseudo-mystical numerology here from where he started it over at Isabel's place. There's insightful-naïve questions like you're talking about, and then there's sheer nonsense. I'll denigrate nonsense.
Comment by | August 14, 2008 | Reply

6. John: There's one thing I'm not and that's pseudo-mystical. (that is to say, I don't accept other people's nonsense, but accidentally stumble into creating my own nonsense to ignorance) I have one real question. What is knowledge, and when can it be said to exist and not exist. In math I can far to ignorant to distinguish between cause and effect and coincidence. You point out when I am wrong, but you don't explain how you arrive at your conclusions, so I am forced to accept assertions as fact. But being a very skeptical I am loathe to do so. I would very enjoy to learn how you think and what is wrong with my nonsense. I just wish you would explain it.

"pi is far from independent of the number system. It is what it is exactly because of the way the real number system is structured."

I have no idea what that statement means. I just started reading "What is Mathematics" http://www.amazon.com/Mathematics-Elementary-Approach-Ideas-Methods/dp/0195105192 Maybe by the time I get to the end I will understand.

Comment by Kurt Osis | August 15, 2008 | Reply

7. sorry the typos above, should read:

"…my own nonsense [due] to ignorance…"

"In math I [am] far [too] ignorant…"

"I would very [much] enjoy…"

Comment by Kurt Osis | August 15, 2008 | Reply

8. Big statements like "all knowledge comes from [foo]" are pseudo-mystical. Especially when [foo] is some silly little constant that people ascribe all sorts of significance too mostly because it's the only recognizably "mathy" word they know. And people do that with $\pi$ all the time.

The full derivation of $\pi$ from the ground up is much too long for a comment. I'll get there when I get there. The basic idea is that it's half the period of any solution to a certain natural differential equation. You don't need any reference to the real world at all, especially not to idealized geometrical constructs which don't actually exist in the real world in the first place.

The questions that JVP referred to differ from yours in that they actually mean something. "Does all knowledge come from [foo]?" is a grammatically-correct sentence, but that's about it. It has the veneer of epistemology, but it's more at home with such classics as "Have you ever looked at your hands? I mean really looked at your hands?"

Comment by | August 15, 2008 | Reply

9. [...] Then the discussion moved from an unrelated post on Michael's weblog to an unrelated post on mine. [...]

Pingback by | October 16, 2008 | Reply
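For completeness, here is a sketch of the construction that the post alludes to; this is standard material, stated here in my own words rather than quoted from Gowers' post. View $\mathbb{R}$ as a vector space over $\mathbb{Q}$; by Zorn's lemma it has a basis $B$ (a Hamel basis), so each real $x$ has a unique finite expansion $x = \sum_{b \in B} q_b\, b$ with $q_b \in \mathbb{Q}$. Choose values $f(b)$ arbitrarily for $b \in B$ and extend $\mathbb{Q}$-linearly:

$$f\Big(\sum_{b\in B} q_b\, b\Big) = \sum_{b\in B} q_b\, f(b).$$

Uniqueness of the expansion makes $f$ well defined, and $f(x+y) = f(x) + f(y)$ is immediate. Unless the chosen values happen to satisfy $f(b) = c\,b$ for one fixed constant $c$, the resulting $f$ is additive but not of the form $x \mapsto cx$, that is, additive but not $\mathbb{R}$-linear (and necessarily very badly discontinuous).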
http://mathoverflow.net/questions/32055/when-can-we-factor-out-the-time-dimension/32058
## When can we factor out the time dimension?

First of all, my knowledge of both GR and differential geometry is quite weak, so forgive me if the physics here doesn't make much sense.

Let $(M, g)$ be a smooth, connected Lorentzian manifold of dimension $n$. Let $f: \mathbb{R}\to M$ be a smooth curve such that the pullback of $g$ through $f$ is everywhere negative (where we've chosen an orientation on $\mathbb{R}$); we say that $f$ is time-like. Say that we can "factor" $f$ out of $M$ if there exists a manifold $S$ of dimension $n-1$ and an isomorphism $M\simeq S\times \mathbb{R}$ so that the map $\mathbb{R}\to M\simeq S\times \mathbb{R}\to S$ is constant and the map $\mathbb{R}\to M\simeq S\times \mathbb{R}\to \mathbb{R}$ is the identity. Intuitively, this factorization exhibits $f$ as "time" in some reference frame, and $S$ as space.

My question is: For which $(M, g)$ can every time-like path be factored out?

Minkowski space seems like an obvious example unless I'm missing something; it seems one can take a tangent vector to $f$ at any point and consider a perpendicular subspace to that vector as $S$. I'd accept as an answer a characterization of all such $(M, g)$ in dimension $4$, or some nice sufficient condition on $M$ for factorization to always work. If the motivation isn't obvious already, this is supposed to codify the intuition that in my reference frame, I seem to be standing still -- and that the same is true for everyone else, even if they seem to me to be moving. My apologies if I've overloaded terms, or used them incorrectly.

Added: Note that this condition is much stronger than stable causality; indeed, it certainly implies stable causality, as choosing any timelike path $f$ and then considering the given projection to $\mathbb{R}$ gives a global time function. However, I am asking for (1) a product structure on $M$ for each path $f$ and (2) in order to formalize the notion that I seem to be standing still (to myself), the projection of $f$ to $S$ must be constant.

Added: I don't think global hyperbolicity suffices either. The theorem of Geroch (it and other splitting theorems are discussed here, for example) does indeed give a decomposition of $M$ as $\mathbb{R}\times S$. But I don't think this is enough. In particular, I am asking for the following---for every timelike path $f: \mathbb{R}\to M$, there is a product structure $M\simeq \mathbb{R}\times S$ such that the projection to $\mathbb{R}$ is a section of $f$, and that $f$ is constant upon projection to $S$. This is much stronger than Geroch's splitting theorem, as far as I can tell.

Added: As the accepted answerer rightly points out in the comments to his answer, I was wrong to claim that my condition is stronger than global hyperbolicity. They are in fact equivalent.

-

## 2 Answers

This sounds to me like you're asking that your spacetime admit a family of Cauchy surfaces (modulo annoyances like having $f$ be closed and acausal). There's a theorem of Geroch which guarantees that this is equivalent to global hyperbolicity. I don't think that globally hyperbolic and stably causal are equivalent conditions, but I'm not an expert on causality conditions in GR, so take this claim with a grain of salt.

-

Reading the wikipedia article, it seems to me that this implies that one path can be factored out in the sense I described; does this guarantee it for every path?
– Daniel Litt Jul 15 2010 at 20:04

You're worried that $S$ may depend on $f$? It might in the metric category, but in the topological category, any two $S$ are certainly isomorphic. – userN Jul 15 2010 at 20:54

I'm worried that the projection of $f$ to $S$ might not be constant. If you read the question carefully, you'll see we need a different product structure for each $f$. – Daniel Litt Jul 15 2010 at 20:57

I still suspect the two conditions are essentially equivalent, as long as you're not worried about metric structure. We know that $M$ is globally hyperbolic, so let's equate $M = S \times \mathbb{R}$. Let $f_0$ be the "base" path, given by $t \mapsto (f_0(t),t)$. Now suppose we have some other path $f_1$, given by $t \mapsto (f_1(t),t)$. I claim that, as long as $S$ is path-connected, for each $t$, we can find a diffeomorphism $d_t: S \to S$ which moves $f_1(t)$ to $f_0(t)$. Moreover, we can make these diffs depend smoothly on $t$. That should give the product structure you want... – userN Jul 15 2010 at 22:30

1 Take a path connecting the first point to the second point. Thicken the path to a neighborhood. Said neighborhood is diffeomorphic to the ball. It should at least be intuitive that you can always map the ball onto itself in a way that moves any given point to the origin. Just grab the point and drag it where you want it. – userN Jul 15 2010 at 22:42

Edit The answer below is not correct. Upon further reflection, I believe that the correct causality condition is indeed global hyperbolicity and not the weaker stable causality.

I believe this is a causality condition known as stably causal. You can read about this in Wald's "General Relativity" Chapter 8, in particular his Theorem 8.2.2. A spacetime $(M,g)$ is stably causal if there exists a continuous nowhere vanishing timelike vector $t$ such that the spacetime $(M,\tilde g)$ with

$$\tilde g = g - t^\flat \otimes t^\flat,$$

where $t^\flat$ is the dual one-form to $t$ relative to $g$, has no closed timelike curves. Wald's Theorem 8.2.2 says that this is equivalent to the existence of a global time function on $M$.

Theorem 8.2.2 A spacetime $(M,g)$ is stably causal if and only if there is a differentiable function $f$ on $M$ such that its gradient is a past directed timelike vector field.

There is a whole hierarchy of causality conditions in GR. Global hyperbolicity implies stable causality and this in turn implies strong causality. Global hyperbolicity might be too strong.

-

According to Wikipedia "stably causal" is something different, or at least not obviously the same. (en.wikipedia.org/wiki/…) The book doesn't seem to be on google books; can you elaborate? – Daniel Litt Jul 15 2010 at 19:58
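To make the question's Minkowski example explicit in at least the simplest case, here is a sketch covering only an inertial (straight-line) observer; this is standard special relativity, and it does not by itself handle the arbitrary timelike paths discussed above. Work in $\mathbb{R}^{1,1}$ with $g = -dt^2 + dx^2$, and let $f$ trace the line $x = vt$ with $|v| < 1$. The Lorentz boost

$$t' = \gamma\,(t - v x), \qquad x' = \gamma\,(x - v t), \qquad \gamma = (1-v^2)^{-1/2},$$

is an isometry, and along the worldline one finds $x' = \gamma(vt - vt) = 0$ and $t' = \gamma t (1 - v^2) = t/\gamma$, the observer's proper time. So with $S = \{t' = 0\}$ and the product structure given by the primed coordinates, $f$ (reparametrized by proper time) projects to a constant in $S$ and to the identity on $\mathbb{R}$, exactly as the question requires. Accelerating observers, even in Minkowski space, need something like the diffeomorphism-dragging argument from the comment thread above rather than a single isometry.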
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9566347002983093, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/254138/how-to-find-the-maximum-number-of-vertices-in-a-tree-with-respect-to-maximum-pat
## How to find the maximum number of vertices in a tree with respect to maximum path length and maximum degree value

Given a tree, find the maximum number of vertices $v$ in that tree, given the maximum path length $p$ and a maximum degree $d$ that applies to all vertices.

Assuming that I drew my test tree correctly, in the case where the path is no longer than $5$ and the vertex degree can be no higher than $3$, the maximum number of vertices should be $14$. Is this right?

I have tried other values for $p$ and $d$ and drawn the trees for them, but I don't see the pattern in the data. I don't know which formula I can use to derive the maximum number of vertices when $p$ and $d$ are unknown. I would like to know what the formula is, if it exists.

## 1 Answer

The easy case is when $p$ is even. In that case you want $d$ paths coming up from the root (level $0$), and each vertex on levels $1$ through $\frac p2-1$ has $d-1$ branches upward. This gives
$$1+d+d(d-1)+d(d-1)^2+\ldots+d(d-1)^{\frac p2-1}=1+\frac{d\left((d-1)^{\frac p2}-1\right)}{d-2}.$$
If $p$ is odd, you can have one branch of height $\frac{p+1}2$ and all the other branches can have height $\frac{p-1}2$. You should be able to follow the above logic to get the expression in that case.
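As a sanity check on the formulas above, here is a small script (my own sketch, not from the answer; the helper name `max_vertices` is hypothetical). It assumes $d \ge 3$ and $p \ge 2$, with path length counted in edges, and reproduces the $p=5$, $d=3 \Rightarrow 14$ example from the question:

```python
def max_vertices(p, d):
    """Maximum number of vertices in a tree whose longest path has at most
    p edges and whose vertices all have degree at most d (assumes d >= 3)."""
    # A branch of height h hanging off the root: each internal vertex
    # spawns d-1 children, giving 1 + (d-1) + ... + (d-1)^(h-1) vertices.
    def branch(h):
        return ((d - 1) ** h - 1) // (d - 2)

    if p % 2 == 0:
        # Root with d branches of height p/2; two leaves in different
        # branches are then exactly p edges apart.
        return 1 + d * branch(p // 2)
    # Odd p: one branch of height (p+1)/2, the other d-1 branches of
    # height (p-1)/2, so the longest leaf-to-leaf path is still p.
    return 1 + branch((p + 1) // 2) + (d - 1) * branch((p - 1) // 2)

print(max_vertices(5, 3))  # 14, matching the hand-drawn example
print(max_vertices(4, 3))  # 10 = 1 (root) + 3 (level 1) + 6 (level 2)
```

The even branch of the code is exactly the closed form $1+\frac{d\left((d-1)^{p/2}-1\right)}{d-2}$ from the answer.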
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301643967628479, "perplexity_flag": "head"}
http://terrytao.wordpress.com/tag/margulis-lemma/
## 254A, Notes 9: Applications of the structural theory of approximate groups

13 November, 2011 | by Terence Tao

In the last set of notes, we obtained the following structural theorem concerning approximate groups:

Theorem 1 Let ${A}$ be a finite ${K}$-approximate group. Then there exists a coset nilprogression ${P}$ of rank and step ${O_K(1)}$ contained in ${A^4}$, such that ${A}$ is covered by ${O_K(1)}$ left-translates of ${P}$ (and hence also by ${O_K(1)}$ right-translates of ${P}$).

Remark 1 Under some mild additional hypotheses (e.g. if the dimensions of ${P}$ are sufficiently large, or if ${P}$ is placed in a certain “normal form”, details of which may be found in this paper), a coset nilprogression ${P}$ of rank and step ${O_K(1)}$ will be an ${O_K(1)}$-approximate group, thus giving a partial converse to Theorem 1. (It is not quite a full converse though, even if one works qualitatively and forgets how the constants depend on ${K}$: if ${A}$ is covered by a bounded number of left- and right-translates ${gP, Pg}$ of ${P}$, one needs the group elements ${g}$ to “approximately normalise” ${P}$ in some sense if one wants to then conclude that ${A}$ is an approximate group.) The mild hypotheses alluded to above can be enforced in the statement of the theorem, but we will not discuss this technicality here, and refer the reader to the above-mentioned paper for details.

By placing the coset nilprogression in a virtually nilpotent group, we have the following corollary in the global case:

Corollary 2 Let ${A}$ be a finite ${K}$-approximate group in an ambient group ${G}$. Then ${A}$ is covered by ${O_K(1)}$ left cosets of a virtually nilpotent subgroup ${G'}$ of ${G}$.

In this final set of notes, we give some applications of the above results. The first application is to replace “${K}$-approximate group” by “sets of bounded doubling”:

Proposition 3 Let ${A}$ be a finite non-empty subset of a (global) group ${G}$ such that ${|A^2| \leq K |A|}$. Then there exists a coset nilprogression ${P}$ of rank and step ${O_K(1)}$ and cardinality ${|P| \gg_K |A|}$ such that ${A}$ can be covered by ${O_K(1)}$ left-translates of ${P}$, and also by ${O_K(1)}$ right-translates of ${P}$.

We will also establish (a strengthening of) a well-known theorem of Gromov on groups of polynomial growth, as promised back in Notes 0, as well as a variant result (of a type known as a “generalised Margulis lemma”) controlling the almost stabilisers of discrete actions of isometries. The material here is largely drawn from my recent paper with Emmanuel Breuillard and Ben Green.
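As a toy numerical illustration of the bounded-doubling hypothesis in Proposition 3 (in the abelian setting, written additively; this sketch and the helper name `doubling` are mine, not from the notes): an arithmetic progression has doubling constant about ${2}$, whereas a generic set of the same size has doubling on the order of ${|A|/2}$, reflecting the absence of approximate group structure.

```python
import random

def doubling(A):
    """Doubling constant |A+A| / |A| of a finite set of integers,
    the additive analogue of |A^2| <= K|A|."""
    sumset = {a + b for a in A for b in A}
    return len(sumset) / len(A)

N = 200
progression = set(range(N))  # behaves like a rank-1 progression

# A random N-element subset of a huge interval: almost all pairwise sums
# are distinct, so |A+A| is close to N(N+1)/2.
generic = set(random.sample(range(10**12), N))

print(doubling(progression))  # ~2    (|A+A| = 2N - 1)
print(doubling(generic))      # ~N/2  (no small covering by progressions)
```

Proposition 3 upgrades this observation to arbitrary groups: small doubling forces ${A}$ to be efficiently covered by translates of a coset nilprogression.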
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8665198087692261, "perplexity_flag": "middle"}