http://mathoverflow.net/revisions/5638/list
## Return to Answer 5 added 1553 characters in body As other people in this thread have pointed out, it's unsatisfying to make an automorphism tower that only stabilizes transfinitely as a direct limit, when all of the finite terms of the tower are abstractly isomorphic to the base group $G$. I Googled around a bit more and came back to the same two sources, Thomas' book, and this time a joint result of Hamkins and Thomas which is in chapter 8 of the book. If an automorphism tower stabilizes after exactly $n \in \mathbb{N}$ steps in the direct limit sense, then it also stabilizes after exactly $n$ steps in the weaker abstract isomorphism sense. (Otherwise the direct limit "wouldn't know to stop".) Hamkins and Thomas do better than that. For any two ordinals $\alpha$ and $\beta$, which may or may not be finite numbers, they find one group $G$ whose automorphism tower has height $\alpha$ and $\beta$ in two different models of ZFC set theory. (Whether it's really the "same" group in different worlds is unclear to me, but their models are built to argue that it is so.) I would suppose that it is possible to make a tower without isomorphic terms by taking a product of these groups, even without the two-for-one property. Other than one paper on the Grigorchuk group by Bartholdi and Sidki, I haven't found anything on automorphism towers of finitely generated groups. The Grigorchuk group has a countably infinite tower, but I'd have to learn more to know whether the terms are abstractly isomorphic. 4 More clarification; added 1 characters in body I remember that my old grad classmate from Berkeley, Joel Hamkins, worked on the transfinite version of this problem. The Automorphism Tower Problem, by Simon Thomas, is an entire book on this subject. The beginning of the book gives the example of the infinite dihedral group $D_\infty$, in the sense of $\mathbb{Z}/2 \ltimes \mathbb{Z}$. It says that the automorphism tower of this group has height $\omega+1$. It also treats Joel's theorem, which says that every automorphism tower does stabilize, transfinitely. A Proceedings paper with the same author and title says that Wielandt showed that every finite centerless group has a finite automorphism tower. An improved answer: Simon's book later shows that the automorphism tower of the finite group $D_8$ has height $\omega+1$, and that for general finite groups no one even knows a good transfinite bound. (The $8$ may look like a typo for $\infty$, but it's not :-).) Apparently the centerless condition is essential in Wielandt's condition. Also, to clarify what these references mean by the automorphism tower, they specifically use the direct limit of the conjugation homomorphisms $G \to \mbox{Aut}(G)$. $D_8$ is abstractly isomorphic to its automorphism group. This is a different version of the question that I suppose does not have a transfinite extension. Section 5 of Thomas' book implies that it's an open problem whether the tower terminates in this weaker sense, for finite groups. Finally an arXiv link to Joel Hamkins' charming paper, Every group has a terminating transfinite automorphism tower. 3 arXiv is a better citation I remember that my old grad classmate from Berkeley, Joel Hamkins, worked on the transfinite version of this problem. The Automorphism Tower Problem, by Simon Thomas, is an entire book on this subject. The beginning of the book gives the example of the infinite dihedral group $D_\infty$, in the sense of $\mathbb{Z}/2 \ltimes \mathbb{Z}$. 
It says that the automorphism tower of this group has height $\omega+1$. It also treats Joel's theorem, which says that every automorphism tower does stabilize, transfinitely. A Proceedings paper with the same author and title says that Wielandt showed that every finite centerless group has a finite automorphism tower. An improved answer: Simon's book later shows that the automorphism tower of $D_8$ has height $\omega+1$, and that for general finite groups no one even knows a good transfinite bound. Apparently the centerless condition is essential in Wielandt's condition. Also, a direct an arXiv link to Joel Hamkins' charming paper, Every group has a terminating transfinite automorphism tower. 2 added 274 characters in body I remember that my old grad classmate from Berkeley, Joel Hamkins, worked on the transfinite version of this problem. The Automorphism Tower Problem, by Simon Thomas, is an entire book on this subject. The beginning of the book gives the example of the infinite dihedral group $D_\infty$, in the sense of $\mathbb{Z}/2 \ltimes \mathbb{Z}$. It says that the automorphism tower of this group has height $\omega+1$. It also treats Joel's theorem, which says that every automorphism tower does stabilize, transfinitely. A Proceedings paper with the same author and title says that Wielandt showed that every finite centerless group has a finite automorphism tower. An improved answer: Simon's book later shows that the automorphism tower of $D_8$ has height $\omega+1$, and that for general finite groups no one even knows a good transfinite bound. Apparently the centerless condition is essential in Wielandt's condition. Also, a direct link to Joel Hamkins' charming paper, Every group has a terminating transfinite automorphism tower. 1 I remember that my old grad classmate from Berkeley, Joel Hamkins, worked on the transfinite version of this problem. The Automorphism Tower Problem, by Simon Thomas, is an entire book on this subject. The beginning of the book gives the example of the infinite dihedral group $D_\infty$, in the sense of $\mathbb{Z}/2 \ltimes \mathbb{Z}$. It says that the automorphism tower of this group has height $\omega+1$. It also treats Joel's theorem, which says that every automorphism tower does stabilize, transfinitely. A Proceedings paper with the same author and title says that Wielandt showed that every finite group has a finite automorphism tower. Also, a direct link to Joel Hamkins' charming paper, Every group has a terminating transfinite automorphism tower.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9476127028465271, "perplexity_flag": "head"}
http://cms.math.ca/10.4153/CJM-2002-032-2
# Spherical Functions for the Semisimple Symmetric Pair $\bigl( \Sp(2,\mathbb{R}), \SL(2,\mathbb{C}) \bigr)$ Read article [PDF: 406KB] http://dx.doi.org/10.4153/CJM-2002-032-2 Canad. J. Math. 54(2002), 828-865 Published:2002-08-01 Printed: Aug 2002 • Tomonori Moriyama ## Abstract Let $\pi$ be an irreducible generalized principal series representation of $G = \Sp(2,\mathbb{R})$ induced from its Jacobi parabolic subgroup. We show that the space of algebraic intertwining operators from $\pi$ to the representation induced from an irreducible admissible representation of $\SL(2,\mathbb{C})$ in $G$ is at most one dimensional. Spherical functions in the title are the images of $K$-finite vectors by this intertwining operator. We obtain an integral expression of Mellin-Barnes type for the radial part of our spherical function. MSC Classifications: 22E45 - Representations of Lie and linear algebraic groups over real fields: analytic methods {For the purely algebraic theory, see 20G05} 11F70 - Representation-theoretic methods; automorphic representations over local and global fields
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7003493905067444, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/204839/find-the-formula
# Find the formula 10 feet of common 2-inch-bore lead piping weighs 50 lb. What is the formula for the weight of x feet of such lead piping? --Sawyer, Mathematician's Delight I am not sure what I am asked to do here. It is known that $$10F*2=50$$ and I am asked to write it down as $$XF*2=W$$ or is it something else? - ## 2 Answers Let $W$ be the weight as a function of the length $L$. So you know that when $L= 10$ ft then $W = 50$ lb. We will sometimes write this as $W(10) = 50$. Now if $10$ ft weighs $50$ lb, then what would $5$ ft weigh? Well, half of that, i.e. $25$ lb. What would $1$ foot weigh? Well, one tenth of that, i.e. $5$ lb. So $1$ foot of pipe weighs $5$ lb. $2$ ft would weigh double that. Can you find the formula now? (Note that the fact that the pipe is $2$ in thick doesn't matter.) - First of all, the 2 in the "2-inch-bore" is utterly irrelevant and has no place in any formula. Think of it as "10 feet of (whatever-you-want) weighs 50 pounds: $x$ feet weighs how many pounds?" You want a formula for "how many", and you want that formula to involve $x$. -
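For completeness, the formula both hints lead to (spelling out the proportionality argument): from $W(10) = 50$ lb and the fact that weight scales linearly with length, $$W(x) = \frac{50}{10}\,x = 5x \ \text{lb},$$ and the 2-inch bore plays no role in it.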
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9738687872886658, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2008/07/10/pushing-behrend-around/?like=1&source=post_flair&_wpnonce=5092e672ce
Gil Kalai’s blog ## Pushing Behrend Around Posted on July 10, 2008 by Erdos and Turan asked in 1936: What is the largest subset $S$ of {1,2,…,n} without a 3-term arithmetic progression? In 1946 Behrend found an example with $|S|=\Omega \bigl(n/(2^{2 \sqrt 2 \sqrt {\log_2 n}} \log^{1/4}n)\bigr).$ Now, sixty years later, Michael Elkin pushed the $\log^{1/4} n$ factor from the denominator to the numerator, and found a set with $|S|=\Omega \bigl(n \log^{1/4}n/2^{2 \sqrt 2 \sqrt {\log_2 n}} \bigr)$! Here is a description of Behrend’s construction and its improvement as told by Michael himself: “The construction of Behrend employs the observation that a sphere in any dimension is convexly independent, and thus cannot contain three vectors such that one of them is the arithmetic average of the other two. The new construction replaces the sphere by a thin annulus. Intuitively, one can produce larger progression-free sets because an annulus of non-zero width contains more integer points than a sphere of the same radius does. However, unlike in a sphere, the set of integer points in an annulus is not necessarily convexly independent. To counter this difficulty I show that as long as the annulus is sufficiently thin, the set U of its integer points contains a convexly independent subset W whose size is at least a constant fraction of the size of U. The subset W is, in fact, the exterior set Ext(U) of the set U. The set U above is the set of integer points of the intersection of a very thin annulus with a cube. The (minimum) dimension k of the space $R^k$ in which this body has non-zero volume is not constant, but rather it tends to infinity logarithmically with the radius of the annulus. Consequently, it becomes nontrivial to estimate the volume of this body, let alone the number of integer points that it contains. In addition, most known estimates for the discrepancy between the number of integer points and the volume assume that the dimension is fixed, and thus these estimates are inapplicable in this case. Moreover, since the annulus is very thin, its volume is not much larger than its surface area, and thus crude estimates of the discrepancy between the number of integer points and the volume do not suffice. Showing more precise estimates involves a rather delicate analysis.” This entry was posted in Combinatorics, Updates and tagged Arithmetic progressions, Roth's theorem, Szemeredi's theorem. ### 10 Responses to Pushing Behrend Around 1. Gil Kalai says: Two recent followups to Elkin’s work are of interest. In October, Ben Green and Julia Wolf published an alternative proof of Elkin’s result: see http://arxiv.org/abs/0810.0732 A few weeks ago Kevin O’Bryant extended the results to progressions of length k: see http://arxiv.org/abs/0811.3057v2 It is a great mystery what the answer to the Erdos-Turan problem is. Not only is there a huge gap between the lower and upper bounds, but I am not aware of any convincing or even unconvincing heuristic arguments, or ideas-on-record or even thoughts-on-record about where the truth lies. This would be a very nice subject for a mathematical discussion, or brain-storming, or massive collaboration, or small talk, but it is not clear how to start. If you have any ideas on the matter, you are most welcome to express them. 2. gowers says: Gil, You’ve given me an idea for a second massive-collaboration project to try if the first one fails (or even more so if it succeeds). 
It’s not quite what you suggest here, but it’s closely related. I won’t say more at this stage, except that I have two heuristic arguments that the correct bound, at least for progressions of length 3, should be roughly what is given by the Behrend bound. One is the unconvincing argument that I once thought of a construction to obtain a lower bound for the triangle-removal problem and the bound that came out was exactly the Behrend bound. This argument is unconvincing because when I thought about it a bit more it became clear to me that the construction was actually very similar to Behrend’s. However, it still suggested to me that Behrend’s construction wasn’t just a clever trick, but something rather natural. The other argument I’m afraid I’ll have to be even less precise about, but I recently came up with a slightly different way of proving Roth’s theorem, which I haven’t actually written up properly so can’t guarantee the correctness of, but it looks from that argument as though there is a chance of using strong bounds for Freiman’s theorem (which seem highly plausible, even though nobody has yet managed to prove them) to obtain Behrend-type bounds in Roth. Even this might be a good “open source” project, though it would be too technical to be massively collaborative. 3. Gil Kalai says: Dear Tim, thanks a lot for the remark! Any heuristic argument about the answer to the Erdos-Turan problem for k=3, and in particular heuristic arguments suggesting that Behrend’s bound gives the true behavior, would be great. (I am a little confused about the description of your first argument since it seems you describe a construction similar to Behrend’s and not an argument for a matching bound from the other direction.) In the second heuristic, I suppose you mean that the strong bounds for Freiman’s theorem may give Behrend-type upper bounds for the Erdos-Turan problem; namely, show that a set S in {1,2,…,n} without a 3-term AP has at most n/g(n) elements where g(n) behaves like $\exp(\log^c n)$. For interested readers, here is a brief description of the state of affairs: It is known that for some constant c>0 a subset of {1,2,…,n} of size $n/\log^c n$ must contain an AP of size 3. (The best value of c is due to Bourgain and it is below 1.) Getting this for c>1 would apply to 3-term APs in the primes and it looks out of reach. For some constant 0<c<1 (c=1/2 if you take the log with base 2), it is known (Behrend) that there are sets S in {1,2,…,n} of size $n/\exp(\log^c n)$ without a 3-term AP. So the gap is huge!! If I have to draw a line in the middle I would ask: For the correct value of g(n), is g(g(n)) superlogarithmic or sublogarithmic? 4. gowers says: Just to clarify the first argument, what I meant was that when I first thought of the other construction, I was hoping that it would give a better bound than the Behrend bound, but then it gave exactly the same bound. That led me to think that perhaps there was a natural obstacle there, which might be the obvious one that it is the correct bound. When I realized that the new construction was fairly similar to Behrend’s, I felt that less strongly (since it no longer felt like an independent piece of experimental evidence) but I still had the feeling that the alternative construction had arisen in a natural enough way to suggest that that bound could be the right one. Previously, Behrend’s construction had felt like a piece of magic that left me with no clue about whether or not it could be improved. 5. 
Gil Kalai says: Dear Tim, You wrote “I recently came up with a slightly different way of proving Roth’s theorem, which I haven’t actually written up properly so can’t guarantee the correctness of, but it looks from that argument as though there is a chance of using strong bounds for Freiman’s theorem (which seem highly plausible, even though nobody has yet managed to prove them) to obtain Behrend-type bounds in Roth. Even this might be a good “open source” project, though it would be too technical to be massively collaborative.” This is extremely interesting and I would be very interested to hear what these strong bounds for Freiman are. Proving Behrend-type bounds in Roth seems completely off-scale and out-of-reach. (And I do not see any strong reason to believe that this is where the truth is.) As I said, as something orthogonal to collective (or massive) efforts to prove something (like your recent polymath 1) or formulate a conjecture (polymath2), I wonder if a mathematically-looking-beyond-the-horizon discussion is possible and if it can be fruitful. (I tend to think that it is possible, not particularly fruitful, not entirely respectable, but it can be fun anyway.) I may try to initiate such a discussion at some later time. 6. Pingback: An Open Discussion and Polls: Around Roth’s Theorem « Combinatorics and more 7. Warren D. Smith says: Dear Michael Elkin. I believe your improvement on Behrend can be improved further. In fact, one can get far more than a c*log(n)^(1/2) factor improvement; we get more like a factor exp(c*log(n)^(1/2)). The two ideas needed are as follows. IDEA#1: The set of integer points on a sphere of radius R in n dimensions is uniformly distributed angularly if n>=3 when R becomes large. (Known fact.) That means you can use integrals (e.g. over spherical caps) instead of sums, etc., for purposes of obtaining estimates. IDEA#2: Forget this idea of making all the points in your spherical shell be “convexly independent” (by which, I assume, you mean all of them are extreme points). Instead, proceed as follows. Choose a sphere of radius r and thickness s of the shell. Pick each integer point inside that shell to be in your set or not, by tossing a coin. Argue that the expected number of violations of a+b=2c you will get in this way is going to be less than a constant times the number of points in the shell. The argument can be made because the only way to get a violation is if the two points a and b are close together, which corresponds to a small-volume spherical cap, i.e. a small probability of occurrence. Throw them out; there are now no violations. In this way, at the cost of a constant factor due to the discards and coin-tossing, we have made (or anyhow proved the existence of) an average-free set. The gain far exceeds the loss because you can actually make the shell thickness s be a constant power of r instead of logarithmic. IDEA#3: By the way, Behrend was stupid to use a sphere based on the quadratic polynomial x^2. He should have based it on (x-H) * (x-H+1) for a suitable number H about halfway through the allowed digit range. This yields a big constant factor improvement. Warren D. Smith 8. Warren D. Smith says: Sorry, I miscalculated (above comment); it appears this idea does not improve over Elkin. 9. Pingback: Roth’s Theorem: Tom Sanders Reaches the Logarithmic Barrier | Combinatorics and more 10. Pingback: A Couple Updates on the Advances-in-Combinatorics Updates | Combinatorics and more
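Since Behrend’s construction is only described in words above, here is a minimal Python sketch of the sphere idea (a toy illustration of the basic construction, not Elkin’s annulus refinement, with no attempt to optimize the dimension or digit range):

```python
# Behrend's idea: take digit vectors lying on one sphere (a strictly convex
# surface has no three points with one the average of the other two) and read
# them off as integers in a base large enough that addition has no carries.
from itertools import product
from collections import defaultdict

def behrend_set(d, k):
    """Return a 3-AP-free subset of {0, ..., (2d-1)**k - 1}."""
    spheres = defaultdict(list)
    for v in product(range(d), repeat=k):          # digit vectors in {0,...,d-1}^k
        spheres[sum(x * x for x in v)].append(v)   # group by squared norm
    best = max(spheres.values(), key=len)          # best-populated sphere
    base = 2 * d - 1                               # digit sums stay below the base
    return {sum(x * base**i for i, x in enumerate(v)) for v in best}

def has_3term_ap(s):
    s = set(s)
    return any(a != c and 2 * c - a in s for a in s for c in s)

S = behrend_set(d=5, k=3)
print(len(S), has_3term_ap(S))   # a modest 3-AP-free set, and False
```

The pigeonhole step in Behrend’s actual argument chooses d and k as functions of n so that the best sphere captures at least a 1/(k d^2) fraction of all d^k digit vectors; optimizing that trade-off is where the $2^{2\sqrt{2}\sqrt{\log_2 n}}$ loss comes from.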
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307562708854675, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/106953/number-of-possible-prufer-codes
# Number of possible Prüfer codes I am trying to solve the following problem in my book: (Code stands for Prüfer code) Consider labelled trivalent rooted trees $T$ with $2n$ vertices, counting the root labeled $2n$. The labels are chosen in such a way that the procedure leading to $P(T)$ has $1,2,\cdots 2n-1$ as first row. How many possible codes $P(T)$ are there? 1. As I understand the question, it is asking for the number of ways of labeling the tree such that if we run the Prüfer algorithm, first time the vertex labeled $1$ is removed, second time the vertex labeled $2$ is removed and so on, until the vertex labelled $2n-1$ is removed. Is my understanding of the question correct? 2. There are going to be $n+1$ monovalent vertices in such trees, which means there are $n+1$ ways to label a vertex as $1$. But how many ways will be there to label a vertex as $2$? As soon as a vertex has been labeled as $1$, and removed in the first iteration there are either $n$ monovalent vertices remaining or $n+1$ monovalent vertices remaining. So there seem to be two possibilities for labeling $2$: either among $n$ or among $n+1$. How do I decide the number of ways of choosing $2$. 3. An example of such a rooted trivalent tree is: Thanks. - Presumably $P(T)$ refers to the Prüfer code for $T$? What does "as first row" refer to? In the Prüfer code as described e.g. in the Wikipedia article you link to, there are no rows. (Note that you misspelled the name of the code; if you don't have umlauts on your keyboard, you can copy them e.g. from the Wikipedia article.) – joriki Feb 8 '12 at 8:13 Yes $P(T)$ means Prüfer code. By the first row we mean the finite sequence of monovalent vertices of smallest label which we remove to get a smaller tree. (I have linked the description of the Prüfer codes in my book instead of the wikipedia link now.) – Shahab Feb 8 '12 at 8:29 – joriki Feb 8 '12 at 8:51 Since you've put up a bounty for this, perhaps you should explain what you'd like to see in addition to my answer? – joriki Feb 10 '12 at 16:34 Oh I don't mean to say that your explanation is insufficient. I would just like to make sure that our understanding of the question is correct by getting some other opinions. – Shahab Feb 11 '12 at 2:18 ## 1 Answer I'm a bit surprised that this exercise appears so early in a combinatorics book without any hints. Regarding your first question, yes, I understand the question the same way. What I don't understand is how you originally expected us to help you in interpreting the question without linking to your book or explaining what the "first row" was. I'd suggest to always directly link to any book you're referring to if it's available online, and also not to change the spelling if you quote from a book – I searched for the problem before you provided the link and would have found it if you hadn't changed the spelling of "labeled" to "labelled" :-) Regarding your second question: What you want to count isn't the ways of labelling the vertices of a given graph; you want to count all such graphs with all labellings for all structures; so I don't think this approach will get you very far. The problem itself doesn't define what exactly a rooted trivalent tree is. In particular, it's not immediately clear whether the root itself can have degree $1$ or $3$ or only one of the two. Unfortunately Google Books doesn't show (at least to me) the page with Figure 14.3 that the problem refers to, which would probably resolve this. 
However, this page seems to indicate that the root may be required to have degree $1$, which would fit with how this book seems to use the term, and also with this solution, which apparently makes that assumption. The given condition is equivalent to the condition that the vertices are labelled in decreasing order as viewed from the root. Thus we're counting the number of labelled full/proper/strictly binary trees with vertex labels increasing from the root. The recurrence $$a_n=\frac12\sum_{k=1}^{n-1}\binom{2n-2}{2k-1}a_ka_{n-k}$$ derived in the solution linked to yields the initial terms $1,1,4,34,496,\dotsc$, which lead to OEIS sequence A002105. - Thank you for taking the time out to answer me. I am really grateful. I have uploaded a figure which appeared as an example in the book for a rooted trivalent tree, and which confirms your conjecture of the root being of degree 1. However, recurrence relations appear later in the book, so I don't think it's expected that they are to be used in the solution. – Shahab Feb 8 '12 at 15:48 @Shahab: You're welcome. I agree that this doesn't seem like the right sort of solution at this point in the book, but as you can see on the page I linked to, the explicit solution for the recurrence relation is a rather complicated expression involving several powers and Bernoulli numbers, so it also seems unlikely that this could be derived by more suitable means. It could be that a) this was meant as an advanced exercise, b) there's something wrong with the solution, c) we're misunderstanding the question, or d) the authors got the solution wrong and erroneously thought it was easy. – joriki Feb 8 '12 at 16:16
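A quick sanity check of that recurrence (an added sketch, not from the thread): computing the first few terms in Python reproduces the values quoted above.

```python
from math import comb

def a(n, _memo={1: 1}):
    """a_n = (1/2) * sum_{k=1}^{n-1} C(2n-2, 2k-1) * a_k * a_{n-k}, with a_1 = 1."""
    if n not in _memo:
        _memo[n] = sum(comb(2*n - 2, 2*k - 1) * a(k) * a(n - k)
                       for k in range(1, n)) // 2   # the sum is symmetric, hence even
    return _memo[n]

print([a(n) for n in range(1, 6)])   # [1, 1, 4, 34, 496], matching A002105
```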
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9671257138252258, "perplexity_flag": "head"}
http://inperc.com/wiki/index.php?title=Homology_of_homotopic_maps
This site contains: mathematics courses and book; covers: image analysis, data analysis, and discrete modelling; provides: image analysis software. Created and run by Peter Saveliev. # Homology of homotopic maps ### From Intelligent Perception We know that if the realizations of two cell complexes are homeomorphic then their homology groups are isomorphic: $$|K| ≈ |L| => H_*(K) = H_*(L).$$ So, there is some "room for error" in terms of homology. But what about cell maps and their homology maps? When do two different maps induce the same linear operator/homomorphism on homology? Let's consider an example. Let f and g be two maps of the circle, represented by a triangle, to itself: g is the identity and f is a rotation. The maps are different and the corresponding chain maps are different too, but the homology maps are the same: $$f_*(a + b + c) = a' + b' + c',$$ $$g_*(a + b + c) = a' + b' + c'.$$ Now, $f$ can be "continuously transformed" into $g$. Indeed, one can gradually slide $f(x)$ toward $g(x)$. In fact it's easy to think of a simple formula: $$F(t,x) = (1-t)f(x) + tg(x),$$ where $t$ is "time". This transformation is called homotopy. This kind of gradual transformation is easier to see when the maps don't have to be simplicial, such as these maps of the circle to the ring: They can be moved and stretched as if they were rubber bands. The first four maps can be "homotoped" into each other and for these $$f_*(a) = a'$$ but not the last one and for this $$f_*(a) = 0.$$ Theorem. If two maps are homotopic, they induce the same homology maps: $$f \sim g => f_* = g_*.$$
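One standard way to see why the theorem holds (a brief supplement; this argument is not given on the page above): a homotopy $F$ from $f$ to $g$ induces a chain homotopy $P$ between the chain maps, meaning $$f_{\#} - g_{\#} = \partial P + P \partial .$$ On a cycle $x$ (so $\partial x = 0$) this gives $f_{\#}(x) - g_{\#}(x) = \partial (P x)$, a boundary, hence $f_*[x] = g_*[x]$ on homology.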
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441717863082886, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/61814/ask-for-recommendations-for-textbook-on-mathematical-logic/61825
Ask for recommendations for textbook on mathematical logic I studied mathematical logic using a book not written in English. I would now like to study it again using a textbook in English, but I hope I can read a text that is similar to the one I used before, so I ask here for recommendations. Any recommendation will be appreciated. The characteristics of the mathematical logic book I used before are as follows: 1, Formal. Everything is formal, introduced from the very beginning, such as (I'm not sure if my expression is correct in English, which is why I need an English book to remedy this deficiency) the basic bricks of mathematical logic (symbols, formulas, predicate words, logic words, constrained variables, etc.), the formation rules of propositional/predicate logic (how a well-formed formula is constructed recursively), different systems of propositional/predicate logic such as P, P*, F, F* and their relations, and formal reasoning rules in these systems (e.g., in P, there are five formal reasoning rules: $(\in),(\tau),(\neg),(\to_-)$ and $(\to_+)$). 2, Comprehensive. In addition to the concepts in 1, it also introduces the Sheffer stroke, function words and the equality word; the calculus of propositional/predicate logic, including replacement of equal formulas, the replacement theorem, normal forms, Skolem normal forms, Gentzen normal forms, non-embedding normal forms, and reduction of the logic calculus; reliability (soundness); assignment; completeness of propositional/predicate (with equality word) logic; the compactness theorem and the Lowenheim-Skolem theorem; independence. 3, Many examples. In this book, there are many examples of calculus of propositional/predicate logic using the formal reasoning rules of a specified system. It also introduces actual formal mathematical systems such as elementary algebra and the natural numbers, the definition of formal symbols, etc. It used a "diagonal" form of proof to prove things. I hope to have a book in English similar to this one, that is, one that possesses these characteristics. Could you please recommend one? It would be better if the recommended textbook also contained some motivation and some modern material (the book I read was written 30 years ago). Thanks! - 1 Some standard suggestions may be found here: mathoverflow.net/questions/44620/… – Timothy Chow Apr 15 2011 at 14:02 8 This is a sufficiently international forum that it could possibly be helpful for you to name the original book. – Alexander Woo Apr 15 2011 at 19:25 @Alexander Woo: The original book is "数理逻辑基础" (Basics of Mathematical Logic), written by 胡世华 (Hu Shihua) and 陆钟万 (Lu Zhongwan), published by Science Press, China. The first author, who was an academician, was said to be a son of a premier of the former Beiyang government of China. – zzzhhh Apr 16 2011 at 7:56 7 Answers I was going to recommend the English translation of the two-volume sequence by Cori and Lascar, but after reading your message again it seems quite possible that this is the text you used. I really like these two introductory books. - The construction rule for proposition formulas -- Definition 1.2 -- is the same as that of P* in the book I read before. 
It would be nice if this definition were followed by the eleven formal reasoning rules ($(\in),(\tau),(\neg),(\wedge_-),(\wedge_+),(\vee_-),(\vee_+),(\to_-),(\to_+),(\leftrightarrow_-)$ and $(\leftrightarrow_+)$), and then by examples such as $A\to B|--|\neg A\vee B$ and De Morgan's laws, which not only deliver useful theorems of reasoning frequently used in each branch of math but also illustrate the usage of these formal reasoning rules. (to be continued) – zzzhhh Apr 16 2011 at 11:30 I'm surprised to find that no book introduces these formal reasoning rules, not to mention examples. After all, this is the most similar book so far. – zzzhhh Apr 16 2011 at 11:30 For really learning how to do mathematics in a formal logic, I suggest looking at one of the theorem provers and reading its manual or tutorial. For instance, you can take a look at HOL-light. My experience is that books about logic fall short when it comes to the art of really doing mathematics in logic. You asked about examples. If you take a book about logic, you will probably not find an example of a proof of a well-known theorem in a formal logic. However, you will find it in the manuals of the theorem provers. For me, some things fell into place when I studied HOL-light. Regards, Lucas - This is really interesting. I'll download it and study it. Thank you. – zzzhhh Apr 16 2011 at 8:44 I suggest "A Course in Mathematical Logic for Mathematicians": http://www.amazon.com/Course-Mathematical-Mathematicians-Graduate-Mathematics/dp/1441906142/ref=sr_1_1?s=books&ie=UTF8&qid=1302892207&sr=1-1 - I would suggest "A concise introduction to mathematical logic" by Wolfgang Rautenberg. See the Springer website at http://www.springer.com/mathematics/book/978-1-4419-1220-6 . I think it satisfies most of your requirements. - Boolos and Jeffrey's book Computability and Logic may be of interest. In some respects its aim is breadth rather than depth. For example, the chapter on forcing does just enough to prove one interesting theorem (in arithmetic, not in set theory), and similar things are true of other topics. It seems to be addressed to mathematicians who don't know the topic and who want to find out what is of interest in the subject, not to people who want to learn all the skills needed to work in that area. - The best introduction I know is P. Johnstone, "Notes on Logic and Set Theory" (Cambridge Mathematical Textbooks). - Maybe it's not exactly what you had in mind (well, it was written about 30 years ago as well), but a nice book too: H.D. Ebbinghaus, J. Flum, W. Thomas, Mathematical Logic (Undergraduate Texts in Mathematics). Springer, 1984. http://books.google.com/books?id=VYLA8m7cqYcC -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363257884979248, "perplexity_flag": "middle"}
http://www.birs.ca/events/2010/5-day-workshops/10w5025
# Optimal transportation and applications (10w5025) Arriving Sunday, April 18 and departing Friday April 23, 2010 ## Organizers Alessio Figalli (The University of Texas at Austin) Yuxin Ge (Universite Paris Est Creteil) Young-Heon Kim (University of British Columbia) Robert McCann (University of Toronto) Neil Trudinger (Australian National University) ## Objectives The purpose of the workshop is to bring together several subgroups working in the different features of mass transportation theory,to report on recent progress,identify the questions and issues which will drive the next wave of research, and offer an opportunity for the exchange of ideas and cross-fertilization. The proposed gathering will include scientists working in geometrical aspects of the theory of elliptic Monge-Ampere equations, geometers, analysts, applied mathematicians and selected experts in economics, meteorology, and design. Special attention will be giving to the inclusion of young researchers, minorities and women. The smoothness of optimal transport maps is an important issue in transportation theory since it gives information about qualitative behavior of the map, as well as simplifying computations and algorithms in numerical and theoretical implementations. Thanks to the results of Brenier and McCann, it is well known that the potential function of the map satisfies a Monge-Ampere type equation, an important fully nonlinear second order elliptic PDE arising in differential geometry. In the case of the quadratic cost function in Euclidean space, pioneering papers in this field are due to Delanoe, Caffarelli and Urbas. Very recently, Ma, Trudinger and Wang [2005] discovered a mysterious analytical condition (by now called (A3S) or the Ma-Trudinger-Wang condition) to prove regularity estimates for general cost functions. Costs functions which satisfy such a condition are called regular. At this point, Loeper [2007] gave a geometric description of this regularity condition, and he proved that the distance squared on the sphere is a uniformly regular cost, giving the first non-trivial example on curved manifolds. The Ma-Trudinger-Wang tensor is reinterpreted by Kim-McCann [2007] in an intrinsic way, and they show that it can be identified as the sectional curvature tensor on the product manifold equipped with a pseudo-Riemannian metric with signature $(n,n)$. Finally, recent results of Loeper-Villani [2008] and Figalli-Rifford [2008] show that the regularity condition on the square distance of a Riemannian manifold implies geometric results, like the convexity of the cut-loci. This new progress opens several directions in optimal transportation theory, especially its geometric aspects. Links from optimal transport to geometric analysis, including to the theory of Ricci curvature and Ricci flow, have begun to emerge because of recent works of Lott, Villani, Sturm, Topping and McCann. The possibility to define useful analogs of such concepts in a metric measure space setting has been a tantalizing goal, only partly realized so far. Still this progress, together with the original contribution due to Otto on the formal Riemannian structure of the Wasserstein space and its application on PDE, is having a strong impact on the research community. Optimal transportation has also provided a new and simpler way to establish sharp geometric inequalities like the isoperimetric theorem, optimal Sobolev inequalities and optimal Gagliardo-Nirenberg inequalities. 
The proposed workshop will facilitate the interplay between the theory of optimal transport, geometric analysis and nonlinear partial differential equations. On the other hand, among the numerous applications of optimal transport, we concentrate on economics, meteorology, and design problems: 1. Mass transportation duality is useful in formulating the problem of existence, uniqueness and purity for equilibrium in hedonic models. Recent works of Ekeland, Chiappori, McCann and Nesheim have shown that optimal transportation techniques are powerful tools for the analysis of matching problems and hedonic equilibria. Work of Rochet and Carlier also exposed applications to the principal agent problem - a central paradigm in microeconomic theory, which models the optimal decision problem facing a monopolist whose must act based on statistical information about her clients. Although existence has generally been established in such models, characterization of the solutions, including uniqueness, smoothness, and comparative statics remain pressing open questions. Transportation theory has a wide range of further potential applications in econometrics, urban economics, adverse selection problems and nonlinear pricing. These deserve to be highlighted for the upcoming generation of mathematical researchers by experts in these fields. 2. Geophysical dynamics seeks to understand the evolution of the atmosphere and oceans, which is fundamental to weather and climate prediction. It has been shown that mass transportation theory can be applied to fluid dynamical problems, for instance those governing the large-scale behaviour of the atmosphere and oceans (Cullen [2006]). Here discontinuous solutions find important applications as models for atmospheric fronts, where the point is to analyze the geometry and dynamics of the discontinuity. The theory can also be given a geometrical interpretation, which has led to important extensions in its applicability, and can be used to investigate the qualitative impact of geographical formations, such as mountain ranges. A related open problem to which mass transportation is relevant is the incorporation moisture and thermodynamics into the dry dynamics, to model, e.g., rainstorms. 3. Finally, transportation has a number of promising applications in engineering design - ranging from the construction of reflector antennas or shapes which minimize wind resistance, to problems in computer vision. Oliker and X-J Wang have pioneered the use of transportation theory in reflector design, while Plakhov has been exploring novel applications in aerodynamics. Image registration offers medical applications, in which the goal is establish a common geometric reference frame between two or more diagnostic images captured at different times. Based on the mass transportation theory, Tannenbaum and his group developed powerful algorithms for computing elastic registration and warping maps. The proposal is timely, both because of interest surrounding the enormous applications of optimal transportation methods, and because of new geometric concepts issued from mass transportation theory. The aim for such meeting between the different subgroups in mass transportation theory is to increase their awareness of each other's work, and also to identify the most important unexplored problems and pave the way for collaborations.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184990525245667, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-applied-math/180097-ugly-fourier-transform.html
# Thread: 1. ## Ugly Fourier Transform Hi fellas, Can you help me solve this Fourier transform? $F[e^{(-\frac{(kt)^2}{2})}(kt)^2\ln{((kt)^2)}]$ I have an answer for it, but no explanation is available, and I cannot understand the method or approximation used to find it. Answer: $\propto \min \left({\frac{1}{k},\frac{k^2|\ln{k/\omega}|}{\omega^3}}\right)$. Any idea, assumption or direction for solving this problem is highly appreciated! * $k$ is a real positive constant.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9323130249977112, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43171/why-can-beta-not-be-linearly-proportional-to-t-that-is-beta-constant
# Why can $\beta$ not be linearly proportional to $T$, that is $\beta = constant \times T$? $\beta$ in statistical mechanics is equal to $\frac{1}{k_BT}$ in in thermodynamics, but I do not understand why $\beta\propto T^{-1}$ instead of, say, $\beta\propto T$? - ## 4 Answers Because by convention, we want to write $\beta$ as the coefficient in front of energy $E$ in the exponent $\exp(-\beta E)$. Exponents have to be dimensionless so $\beta$ has to have units of inverse energy. That's why it has to be objects such as $\beta=1/kT$ because $kT$ has units of energy. The latter statement holds because the energy per degree of freedom increases with the temperature. At the end, that's the fundamental answer to your question. The exponential is $\exp(-E/kT)$ because $E\sim kT$ and "hot" means "highly energetic". It's linked to the fact that the temperature has an absolute lower bound, the absolute zero, much like energy is bounded from below. - Sometimes a definition is just a definition. The underlying math of statistical mechanics demands that we work with $E/kT$ quite a bit, and historically, we've chosen to work with $\beta \equiv 1/kT$. This can make some mathematical manipulations easier: for instance, $\frac{\partial}{\partial \beta} e^{\beta E} = E e^{\beta E}$ is easier to work with than if we explicitly left in a division (but there is "conservation of misery" here, as my advisor likes to say, where the nice things we get for doing this cost in converting from $\partial/\partial \beta$ to $\partial/\partial T$ down the road). You're entirely free to rewrite the whole of statistical mechanics in terms of some different parameter that is proportional to $T$. Call it $\tau$ if you like. $\tau = kT$. This can be an illustrative exercise for the inquisitive mind, seeing how it makes some expressions simpler and others more laborious. - $\beta$ is the fundamental quantity that appears in the laws of statistical mechanics, and comparison with the laws of thermodynamics then implies that $\beta$ is inversely proportional to the temperature. This can be seen from the first law, which says $dU=TdS-PdV$, while statistical mechanics provides at constant volume the formula $dS/k_B=\beta dU$. As a consequence, $\beta=1/k_BT$. Therefore a ''temperature'' defined to be proportional to $\beta$ would measure coldness rather than hotness. It is a historical accident that temperature was designed to measure hotness rather than coldness. The correspondence between statistical mechnaics and thermodynamics would be simpler if it were otherwise, but one cannot change history and the resulting tradition deeply rooted in our culture. - All so very true. One advantage of using $\beta$ as hinted at by Neumaier is that the absolute zero conundrum goes away. One can make $\beta$ as small as one wishes without having any problems. But you'd have to make $\beta$ infinite to get to "absolute zero". – Paul J. Gans Nov 2 '12 at 23:39 In statistical thermodynamics, when using the method of Lagrange multipliers, we obtain an expression as $$-\ln \rho = \alpha + \beta H$$ where $\alpha$ and $\beta$ are the multipliers to be determined. Multiplying by the Boltzmann constant and averaging we obtain the entropy $$\langle S \rangle = k_\mathrm{B}\alpha + k_\mathrm{B} \beta \langle H \rangle$$ Comparing with the thermodynamic entropy for a closed system at constant composition (Euler expression) $$S = S_0 + \frac{U}{T}$$ you obtain the value $\beta = 1/k_\mathrm{B}T$. 
You could try an alternative method by defining a Lagrange multiplier $\beta' = 1/\beta$, $$-\ln \rho = \alpha + H / \beta'$$ Repeating the above procedure you would obtain the value $\beta' = k_\mathrm{B}T$, but I do not find any advantage in this. -
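As a concrete illustration of why the convention is convenient (an added example, not taken from any of the answers above): for a canonical partition function $Z(\beta)=\sum_i e^{-\beta E_i}$, the mean energy comes from a single differentiation, $$\langle E\rangle = \frac{1}{Z}\sum_i E_i e^{-\beta E_i} = -\frac{\partial \ln Z}{\partial \beta},$$ whereas the same quantity expressed through $T$ picks up a chain-rule factor, $\langle E\rangle = k_\mathrm{B} T^{2}\,\partial \ln Z/\partial T$, which is one reason formulas are usually written in terms of $\beta$ first and converted to $T$ at the end.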
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948468804359436, "perplexity_flag": "head"}
http://conservapedia.com/Expectation_(math)
# Expectation (math) ### From Conservapedia The mathematical expectation of a continuously distributed random variable $X$ with probability density function $f(x)$ is $\mbox{E}[X] =\int\limits_{-\infty}^\infty x f(x)dx.$ The expectation is also known as the mean of $X$. The expectation with respect to some function $g(X)$ where $X$ is distributed according to $f(x)$ is $\mbox{E}[g(X)] =\int\limits_{-\infty}^\infty g(x) f(x)dx.$ For a discretely distributed random variable $X$ with probability mass function $p_k$ it is $\mbox{E}[X] = \sum_k p_k x_k.$
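A quick numerical illustration of both definitions, as a short Python sketch (an added example; it assumes NumPy and SciPy, which the page itself does not mention):

```python
import numpy as np
from scipy import stats, integrate

# Continuous case: E[X] for a standard normal, by integrating x * f(x).
f = stats.norm(loc=0, scale=1).pdf
mean_cont, _ = integrate.quad(lambda x: x * f(x), -np.inf, np.inf)

# E[g(X)] with g(x) = x**2 gives the second moment (equal to 1 here).
second_moment, _ = integrate.quad(lambda x: x**2 * f(x), -np.inf, np.inf)

# Discrete case: E[X] for a fair six-sided die, the sum of p_k * x_k.
xs = np.arange(1, 7)
ps = np.full(6, 1 / 6)
mean_disc = np.sum(ps * xs)

print(round(mean_cont, 6), round(second_moment, 6), mean_disc)  # 0.0 1.0 3.5
```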
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028772115707397, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/87480/explanation-on-a-scheme-which-is-not-affine-scheme
## explanation on a scheme which is not affine scheme ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hartshorne at the end of page 76 of his Algebraic Geometry book gives an example of a scheme which is not an affine scheme. The scheme is constructed by gluing two affine lines together along their maximal ideals obtained by removing a point P. There's also a figure accompanying the example: ___________:_________ Can someone please explain how to show that this is not an affine scheme? - 4 One way to do this is to observe that an affine scheme is separated, but this scheme is not separated. – James Cranch Feb 3 2012 at 21:45 ## 3 Answers Call $X$ your scheme over the field $k$, $P_1$ and $P_2$ the two special closed points, $A_1$and $A_2$ their respective open complements and $A_{12}=A_1\cap A_2$, so that $A_i\simeq \mathbb A^1_k$ and $A_{12}\simeq\mathbb G_m$, all affine schemes. Here are some (not independent) proofs that $X$ is not affine. Proof 1 The point $(P_1,P_2)\in X \times X$ is in the closure of the diagonal $\Delta_X\subset X \times X$, but $(P_1,P_2)\notin \Delta_X$ . So $\Delta_X$ is not closed, hence $X$ is not separated and a fortiori not affine Proof 2 The images of the restriction map $\Gamma(A_i,\mathcal O_X)=k[T] \to \Gamma(A_{12},\mathcal O_X)=k[T,T^{-1}]$ are both $k[T]$, and together do not generate $k[T,T^{-1}]$. However, in an affine scheme (or more generally in a separated scheme) the ring of regular sections on the intersection of two open affines is generated by the images of the regular sections on the two opens. Proof 3 The two open immersions $\iota_j:\mathbb A^1_k \to X$ with respective image $A_j\subset X$ coincide on the open subscheme $\mathbb G_m\subset \mathbb A^1_k$ but are nevertheless distinct. This couldn't happen if $X$ were affine (or just separated). Proof 4 The cohomology vector space $H^1(X,\mathcal O_X)$ is infinite dimensional, whereas the cohomology of a coherent sheaf on an affine scheme vanishes in positive degree. In detail, consider the covering $\mathcal U=\lbrace A_1,A_2\rbrace$ of $X$. It is a Leray covering because $A_1,A_2,A_{12}$ are affine hence acyclic, for the coherent sheaf $\mathcal O_X$ (cf. Proof 2) . Thus Čech cohomology computes genuine cohomology. The Čech complex is the linear map $$C^0=\Gamma(A_1,\mathcal O_X)\times \Gamma(A_2,\mathcal O_X)=k[T]\times k[T]\stackrel {d^0}{\to} C^1=\Gamma(A_{12},\mathcal O^*_X)=k[T,T^{-1}]\to 0$$ given by $$d^0(P(T),Q(T)) =Q(T)-P(T)$$. Hence we get $H^1(X,\mathcal O_X)=k[T,T^{-1}]/k[T]$ Proof 5 The Čech complex above proves that $\Gamma(X,\mathcal O_X)=k[T]$ so that the restriction to the strictly smaller open affine subscheme $A_1\subsetneq X$ is bijective: $res: \Gamma(X,\mathcal O_X)\stackrel {\simeq}{\to} \Gamma(A_1,\mathcal O_X)$. This cannot happen for an affine scheme $X$. [In categorical language: $\Gamma$ is an anti-equivalence from the category of affine schemes to that of rings] Proof 6 Every global function $P(T)\in \Gamma(X,\mathcal O_X)=k[T]$ (see Proof 5) takes the exact same value at $P_1$ and $P_2$, namely $P(0)\in \kappa(P_1)=\kappa (P_2)=k$. In contrast given two closed points in an affine scheme , there exists a global regular function vanishing at the first one but not at the second. 
- 3 My friend Otto Forster once told me that he had seen in an old German book on group theory the sentence "Um den Satz zu erhärten, geben wir einen zweiten Beweis", which translates as: "To harden the theorem, we give a second proof". Logicians tell me that they are not convinced of the hardening. Pity, that – Georges Elencwajg Feb 4 2012 at 13:19 I'm impressed. Very nice answer. – Leo Alonso May 22 2012 at 17:33 Thanks for the kind words, @Leo. – Georges Elencwajg May 24 at 6:18

Compute the ring $R$ of globally defined regular functions. If the scheme were affine, then there would be a bijective correspondence between closed points of the scheme and maximal ideals of $R$, given by taking a closed point to the ideal of functions that vanish at that point. But the two points represented by the colon in your diagram give the same ideal of $R$, so this "correspondence" is not injective. Therefore, the scheme is not affine.

Is that only because it's not Hausdorff?

- 1 Many affine schemes are not Hausdorff. – Qiaochu Yuan Feb 3 2012 at 21:44 I think "Hausdorff" here means "separated". – Laurent Moret-Bailly Feb 4 2012 at 9:30 3 Dear Keivan, your question is a comment, not an answer. You should write comments in a box below your question or below someone else's answer (as I just did here). The box will open if you click on the words "add comment". – Georges Elencwajg Feb 4 2012 at 10:07 Thanks Georges, I didn't know that. – Keivan Monfared Feb 5 2012 at 8:19
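A small elaboration of the step behind Proofs 4 and 5 above (my addition, not part of the original thread): the kernel of $d^0$ consists of the pairs $(P,Q)\in k[T]\times k[T]$ with $Q-P=0$ in $k[T,T^{-1}]$, that is, the diagonal copy of $k[T]$, which gives $$\Gamma(X,\mathcal O_X)=H^0(X,\mathcal O_X)=k[T];$$ and the cokernel $k[T,T^{-1}]/k[T]$ contains the classes of $T^{-1}, T^{-2}, T^{-3},\dots$, which are linearly independent, so $H^1(X,\mathcal O_X)$ is indeed infinite dimensional.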
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185464382171631, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/63747/differentiable-function?answertab=oldest
# Differentiable function?

I'm studying the function: $$f(x,y) = \begin{cases} x+y & \text{if } x=0 \text{ or } y=0 \\ 1& \text{if } xy\neq 0 \end{cases}$$

The partial derivatives exist at $(0,0)$, and I can prove that the directional derivatives don't exist at $(0,0)$. Now, is this function differentiable? Why?

- Yes, if you study the directional derivative along a vector $(a,b)$ with $a\neq 0$ and $b\neq 0$, it doesn't exist. And you are right, but is there another way to justify it? Thanks – Hiperion Sep 12 '11 at 0:09

## 1 Answer

No, the function is not differentiable. Here are two proofs of this:

(1) If a function is differentiable at a point, then it is continuous there. Your function is not continuous at $(0,0)$, and therefore not differentiable there.

(2) If a function is differentiable at a point, then all directional derivatives should exist. Your function does not have any directional derivatives in directions other than $\mathbf{e_1} = (1,0)$ and $\mathbf{e_2} = (0,1)$, so the function must not be differentiable.

- Yes, you are right. Thank you very much! – Hiperion Sep 12 '11 at 0:13
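To make both proofs concrete (a short added computation, not from the original answer): for any direction $(a,b)$ with $ab\neq 0$ we have $f(ta,tb)=1$ for all $t\neq 0$, so $$\lim_{t\to 0} f(ta,tb) = 1 \neq 0 = f(0,0),$$ which is the discontinuity used in (1); and the difference quotient for the directional derivative, $$\frac{f(ta,tb)-f(0,0)}{t}=\frac{1}{t},$$ has no limit as $t\to 0$, which is the failure used in (2).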
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169812798500061, "perplexity_flag": "head"}
http://mathoverflow.net/questions/64888/product-of-two-algebraic-subgroups-of-a-solvable-group-another-algebraic-subg/64970
## Product of two algebraic subgroups of a (solvable) group = another algebraic subgroup?

Let $G$ be a linear algebraic group over a field $K$. (Say $K=\mathbb{F}_q$ or $K=\mathbb{C}$; do not assume $K$ is algebraically closed or of characteristic $0$.) Let $H_1$, $H_2$ be algebraic subgroups of $G$. Consider the multiplication map $\phi:H_1\times H_2\to G$. The image of $\phi$ is a constructible set, i.e., a variety $H$ with perhaps a few varieties of lower dimension deleted from it. (This is a special case of a result of Chevalley's.)

Question: when is $H_1(K) H_2(K)$ equal to $H(K)$? There are two issues here: closure (i.e., really getting a variety rather than a constructible set as the image) and rationality.

Getting more specific, since the question above may be too hairy in general:

(a) Assume that $G$ is solvable. Does that help? Can we then answer the question in the affirmative?

(b) Say, furthermore, that both $H_1$ and $H_2$ are in the same unipotent subgroup of $G$, or that $H_1$ is unipotent and $H_2$ is a subgroup of a corresponding maximal torus. Does that help?

- Apologies for the trivial comment, but... There's another group-theoretical issue, surely: in general you wouldn't expect the image to be a subgroup unless $H_1$ and $H_2$ are commuting subgroups. – James Cranch May 13 2011 at 11:34 You are right of course - I mistyped. Still, it would be interesting to know whether one can say anything more if one knows H_1 H_2 to be a group. – H A Helfgott May 13 2011 at 13:03 2 A pedantic remark. If $K$ is a finite field then, as finite groups are algebraic groups for trivial reasons, when your set is a group, it is automatically an algebraic group. It's probably not the group you want though. – Felipe Voloch May 13 2011 at 14:17 Yes, I should really change the phrasing of my question to ask what I wanted to ask (and thereby break Paul Ziegler's answer below...) – H A Helfgott May 13 2011 at 16:55 Harald: I think it's a bit dangerous to think of a constructible set as "a variety with perhaps a few varieties of lower dimension deleted from it". Take for example the affine plane and then remove the x-axis and then re-insert the origin. That's constructible, but in some sense near the origin it is very far from being a variety: there is no sensible notion of an algebraic function in a neighbourhood of the origin as far as I know. In particular it seems to me to be "worse" than a variety with some bits missing---it's a variety with a bit added in quite a strange way. – Kevin Buzzard May 14 2011 at 8:28

## 5 Answers

The following is an answer to a previous version of the question, which asked whether there exists an algebraic subgroup $H$ of $G$ such that $H(K)=H_1(K)H_2(K)$: There are two necessary conditions on $H_1, H_2$: First, the set $\Gamma:=H_1(K)H_2(K)$ has to be a subgroup of $G(K)$. Also, since any algebraic subgroup of an algebraic group over a field is closed, the set $\Gamma$ has to be closed in $G$. These conditions are also sufficient. This follows from the following fact: Let $K$ be a field and $\Gamma$ a subgroup of $\operatorname{GL}_n(K)$ which is closed (for the Zariski topology on $\operatorname{GL}_n(K)$). Then there exists an algebraic subgroup $G$ of $\operatorname{GL}_n$ such that $G(K)=\Gamma$.
This is (part of) Theorem 4.8 of these notes of Milne: http://www.jmilne.org/math/CourseNotes/aag.html

- Nice, but I just broke your answer by changing the question. Sorry. – H A Helfgott May 13 2011 at 16:55

This is not always true. Example: Let $G = \mathbb{G}_m^2$ with $H_1$ the subtorus $\{ (x,x) \in \mathbb{G}_m^2 \}$ and $H_2$ the subtorus $\{ (y,y^{-1}) \in \mathbb{G}_m^2 \}$. Note that $H_1 H_2 = G$. However, given $(u,v) \in G(K)$, we can write $(u,v)$ as $(xy, x y^{-1})$ if and only if $uv$ is a square. So, if there are nonsquare elements of $K$, then $H_1(K) H_2(K) \neq G(K)$.

Here is a general sort of approach to these questions. Let $F = H_1 \cap H_2$. Let $Y = H_1 H_2$. (Note that these are intersections and products of varieties.) Let $m$ be the map $H_1 \times H_2 \to Y$. For $x \in Y(K)$, the fiber $m^{-1}(x)$ is a torsor for $F$; namely, let $F$ act on $m^{-1}(x)$ by $(h_1, h_2) \mapsto (h_1 f^{-1}, f h_2)$. So, if $H^1(\mathrm{Gal}(K), F)$ vanishes, then all torsors for $F$ over $K$ are trivial, and $m^{-1}(x)$ has a point, which is what you wanted.

For example, if the ambient group $G$ is unipotent, and we are in characteristic zero, then $F$ will automatically be unipotent. In particular, $F$ will have a filtration by $\mathbb{G}_a$'s and, as $H^1(\mathrm{Gal}(K), \mathbb{G}_a)=0$, we see that the statement is true in this case.

- What if $G$ is unipotent but we are not in characteristic zero? – H A Helfgott May 14 2011 at 13:37 Also, wait - what happens if $G$ is not unipotent but $F=\{e\}$? Would we automatically get that $H_1(K)H_2(K) = H(K)$, where $H$ is the Zariski closure of $H_1 H_2$? – H A Helfgott May 14 2011 at 14:02 Let $G = \mathbb{G}_a^2$ over a field of characteristic $p$. Let $H_1 = \{ (x,x) \}$ and $H_2 = \{ (y,y^p) \}$. Then $H_1 H_2$ is all of $G$. The point $(0,t)$ is in $H_1(K) H_2(K)$ if and only if $t$ is of the form $x^p-x$. So, if $K$ has nontrivial Artin-Schreier extensions, the result does not hold in this case. – David Speyer May 14 2011 at 14:04 Indeed, I am arguing that if $H_1 \cap H_2 = \{ e \}$ (scheme theoretically) then $(H_1 H_2)(K) = H_1(K) H_2(K)$. – David Speyer May 14 2011 at 14:05 So it is not enough to assume that $(H_1\cap H_2)(K) = \{e\}$, but it is enough to assume that $(H_1\cap H_2)(\overline{K}) = \{e\}$? (Is the latter statement exactly what you meant by "scheme-theoretically" in this context?) (Note: don't assume the ambient group is unipotent or solvable.) – H A Helfgott May 15 2011 at 9:09

Maybe it's worth pointing out that the question contained in the header has an obvious negative answer (and is not the main question being asked). The easiest counterexample would be the product of the two one-dimensional unipotent subgroups in the $3 \times 3$ upper triangular unipotent group which correspond to simple roots for the special linear group. This is a closed set (of dimension 2) but not a subgroup. On the other hand, the product of all three positive root groups in this situation is the entire upper triangular unipotent group. But here, as in many natural solvable groups, you are building a group step-by-step as a product of a normal subgroup and another group.

The following is a partial answer to the question whether $H_1 H_2$ is closed in $G$: This is not true in general.
For example, if $G$ is reductive, $T$ a maximal torus of $G$ and $B$ a Borel subgroup containing $T$, in the Bruhat decomposition of $G$ into double cosets $BwB$ for $w$ in the Weyl group with respect to $T$, for all but one $w$ the cell $BwB$ is not closed in $G$. For such a $w$ the product of the subgroups $B$ and $wBw^{-1}$ is not closed.

However, if $H_1$ and $H_2$ are unipotent, then $H_1 H_2$ is closed. This follows from the fact that any orbit under the action of a unipotent group on an affine variety is closed. This includes the first part of your (b).

$H_1 H_2$ is also closed in the situation of the second part of (b): Let $H_1$ be unipotent and $H_2$ a subgroup of a maximal torus $T$ which normalizes $H_1$ (I assume that's what you mean by corresponding maximal torus). Since $H_1 H_2$ is a disjoint union of translates of $H_1^0 H_2$ we may assume that $H_1$ is connected. Then there exists a Borel subgroup $B$ of $G$ containing $T$ and $H_1$. Let $U$ be the unipotent radical of $B$. The product morphism $U\times T\to B$ is an isomorphism of varieties and under this isomorphism, $H_1 H_2$ corresponds to $H_1\times H_2$. Thus $H_1H_2$ is closed in $B$ and hence in $G$.

- Hi Paul - Thanks for your answer. This takes care of closure, but what about rationality? See the exchange between David Speyer and myself above. – H A Helfgott May 16 2011 at 13:35

If your subgroups are $\{\exp(At)\}_t$ and $\{\exp(Bt)\}_t$ and $[A,B]$, $A$, $B$ are linearly independent, then your statement is false.
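A quick numerical illustration of David Speyer's $\mathbb{G}_m^2$ counterexample above (my own sketch, not from the thread): over $K=\mathbb{F}_5$ the product set $H_1(K)H_2(K)=\{(xy,\,xy^{-1})\}$ consists exactly of the points $(u,v)$ with $uv$ a square, so it covers only 8 of the 16 points of $G(K)$.

```
p <- 5
units <- 1:(p - 1)                                              # F_5^* = {1, 2, 3, 4}
inv <- sapply(units, function(y) units[(units * y) %% p == 1])  # y^{-1} mod p; indexable by y since units == 1:(p-1)
grid <- expand.grid(x = units, y = units)
prod_set <- unique(data.frame(u = (grid$x * grid$y) %% p,
                              v = (grid$x * inv[grid$y]) %% p))
nrow(prod_set)       # 8  -- size of H_1(K) H_2(K)
length(units)^2      # 16 -- size of G(K) = (F_5^*)^2
```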
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 136, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485908150672913, "perplexity_flag": "head"}
http://gottwurfelt.wordpress.com/2012/07/21/fibonacci-subset-fun/
A mathematician blogs.

# Fibonacci subset fun

A week ago I wrote a post on bitwise set trickery in which I asked a question from James Tanton: how many subsets S of {1, 2, …, 10} have the property that S and S+2 cover all the numbers from 1 to 12?

To solve this is a one-liner in R. Slightly generalizing, we can replace 10 by n:

```
library(bitops)
g = function(n){sum(bitOr(0:(2^n-1), 4*(0:(2^n-1))) == (2^(n+2)-1))}
```

Then it’s easy to compute g(n) for n = 1, 2, …, 20:

| n    | 1  | 2  | 3   | 4   | 5   | 6   | 7   | 8    | 9    | 10   |
|------|----|----|-----|-----|-----|-----|-----|------|------|------|
| g(n) | 0  | 1  | 1   | 1   | 2   | 4   | 6   | 9    | 15   | 25   |
| n    | 11 | 12 | 13  | 14  | 15  | 16  | 17  | 18   | 19   | 20   |
| g(n) | 40 | 64 | 104 | 169 | 273 | 441 | 714 | 1156 | 1870 | 3025 |

and if positive integers are at least your casual acquaintances you’ll recognize a lot of squares here, and in particular a lot of squares of Fibonacci numbers: $\cdots 9 = 3^2, 25 = 5^2, 64 = 8^2, 169 = 13^2, 441 = 21^2, \cdots$

The numbers in between the squares are a little trickier, but once we’re primed to think Fibonacci it’s not hard to see $\cdots 15 = 3 \times 5, 40 = 5 \times 8, 104 = 8 \times 13, 273 = 13 \times 21, \cdots$

So this leads to the conjecture that $g(2n) = F_n^2, g(2n+1) = F_n F_{n+1}$ for positive integers $n$. If you’re allergic to cases you can write this as $g(n) = F_{\lfloor n/2 \rfloor} F_{\lceil n/2 \rceil}.$

So how to prove these formulas? We can explicitly list, say, the g(9) = 15 sets that we need.

```
sets = function(n){
  indices = which(bitOr(0:(2^n-1), 4*(0:(2^n-1))) == (2^(n+2)-1))-1; #like the bit trickery above
  for (i in 1:length(indices)) {
    print(which(intToBits(indices[i]) == 01)) #convert from integer to vector of its 1s
  }
}
```

(The "-1" is because lists in R are 1-indexed.) Then, for example, sets(9) outputs the fifteen sets

```
1 2 4 5 8 9
1 2 3 4 5 8 9
1 2 5 6 8 9
1 2 3 5 6 8 9
1 2 4 5 6 8 9
1 2 3 4 5 6 8 9
1 2 3 4 7 8 9
1 2 4 5 7 8 9
1 2 3 4 5 7 8 9
1 2 3 6 7 8 9
1 2 3 4 6 7 8 9
1 2 5 6 7 8 9
1 2 3 5 6 7 8 9
1 2 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9
```

and now we examine the entrails. Each row consists of a subset S of {1, … 9} such that, when we take its union with S + 2, we get all the integers from 1 up to 11. Now when we add 2 to every element of S, we don’t change parity, so it makes sense to look at even and odd numbers separately. If we extract just the even numbers from each of the sets, we get {2, 4, 8}, {2, 6, 8}, or {2, 4, 6, 8}, each of which occurs five times; if we extract just the odd numbers we get {1, 5, 9}, {1, 3, 5, 9}, {1, 3, 7, 9}, {1, 5, 7, 9} or {1, 3, 5, 7, 9}, each of which occurs three times. Every possible combination of one of the even subsets with one of the odd subsets occurs exactly once!

What to make of this? Well, in order for $S \subset \{1, 2, \ldots, 9\}$ to satisfy $\{ 2, 4, 6, 8, 10 \} \subset (S \cup (S+2))$, we must have that S contains 2; at least one of 2 or 4; at least one of 4 or 6; at least one of 6 or 8; and 8. Similarly, looking at just the odd numbers, S must contain 1; at least one of 1 or 3; at least one of 3 or 5; at least one of 5 or 7; at least one of 7 or 9; and 9. And $S$ will satisfy the overall condition if and only if its even part and its odd part do what they need to do; there’s no interaction between them.

So now what? Consider the eighth set in the list above, {1, 2, 4, 5, 7, 8, 9}; call it S. Its even part is {2, 4, 8} and its odd part is {1, 5, 7, 9}.
We can take the even part and divide everything by 2 to obtain a set T = {1, 2, 4}; we can take the odd part, divide through by 2, and round up to get U = {1, 3, 4, 5}. The conditions transform similarly; in order for $S \subset \{ 1, 2, \ldots, 9\}$ to satisfy $\{ 1, \ldots, 11 \} = (S \cup (S+2))$, we have to have that:

• the set T, obtained from S by taking its even subset and dividing through by 2, contains 1; at least one of 1 or 2; at least one of 2 or 3; at least one of 3 or 4; and 4;

• the set U, obtained from S by taking its odd subset and dividing through by 2, contains 1; at least one of 1 or 2; at least one of 2 or 3; at least one of 3 or 4; at least one of 4 or 5; and 5.

Of course it’s easier to say that T is a subset of {1, 2, 3, 4} containing 1, 4, and at least one of every two consecutive elements, and U is a similar subset of {1, 2, 3, 4, 5}.

So how many subsets of {1, 2, 3, …, n} contain 1, n, and at least one of every two consecutive elements? This is finally where the Fibonacci numbers come in. It’s easier to count possible complements of T. If T satisfies the condition given above, then its complement with respect to {1, 2, …, n} is a subset of {2, 3, …, n-1} containing no two consecutive elements. Call the number of such sets f(n). For n = 3 there are two such subsets of {2}, namely the empty set and {2} itself; for n = 4 there are three such subsets of {2, 3}, namely the empty set, {2}, and {3}. So f(3) = 2, f(4) = 3.

Now to find f(n) if n > 4. This is the number of subsets of {2, 3, …, n-1} which contain no two consecutive elements. These can be divided into two types: those which contain n-1 and those which don’t. Those which contain n-1 can’t contain n-2, and so are just subsets of {2, 3, …, n-3} which contain no two consecutive elements, with n-1 added. There are f(n-2) of these. Those which don’t contain n-1 are just subsets of {2, 3, …, n-2} with no two consecutive elements; there are f(n-1) of these. So f(n) = f(n-1) + f(n-2); combined with the initial conditions we see that f(n) is just the nth Fibonacci number.

So we can finally compute g(n). For a set S to satisfy the condition defining g(n) — that is, to have $S \subset \{1, 2, \ldots, n\}$ and $S \cup (S+2) = \{1, 2, \ldots, n+2\}$ — we have to have that the corresponding $T^c$ (the complement within $\{1, 2, \ldots, \lfloor n/2 \rfloor\}$) is a subset of $\{2, \ldots, \lfloor n/2 \rfloor - 1\}$ containing no two consecutive elements, and the corresponding $U^c$ (the complement within $\{1, 2, \ldots, \lceil n/2 \rceil\}$) is a subset of $\{2, \ldots, \lceil n/2 \rceil - 1\}$ containing no two consecutive elements. The number of ways to do this is exactly $F_{\lfloor n/2 \rfloor} F_{\lceil n/2 \rceil}$, which is what we wanted.

I’m looking for a job, in the SF Bay Area. See my linkedin profile.
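As a quick sanity check of the closed form above (my addition, reusing the same `g` together with a small Fibonacci helper):

```
library(bitops)
g = function(n){sum(bitOr(0:(2^n-1), 4*(0:(2^n-1))) == (2^(n+2)-1))}
fib = function(k){round(((1 + sqrt(5))/2)^k / sqrt(5))}  # Binet's formula; exact after rounding for small k
all(sapply(1:16, function(n) g(n) == fib(floor(n/2)) * fib(ceiling(n/2))))  # TRUE
```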
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432320594787598, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/30123-i-cant-get-help.html
# Thread:

1. ## I can't get this... HELP

How many distinct permutations of the letters of the word OTTAWA begin and end with the letter T?

2. Originally Posted by wikji How many distinct Permutations of the letters of the word OTTAWA begin and end with the letter T?

The number of permutations of a set with n elements is n!. Since the word has to begin and end with T, the two T's are fixed in place, leaving the four letters O, A, W, A to arrange, which gives 4! orderings. But we want distinct permutations, so we get rid of the double counting coming from the two identical A's and end up with $\frac{4!}{2!}=\frac{24}{2}=12$
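A brute-force confirmation of that count (my own sketch in base R, not part of the thread):

```
mid <- c("O", "A", "W", "A")                        # the letters between the two fixed T's
idx <- expand.grid(i = 1:4, j = 1:4, k = 1:4, l = 1:4)
idx <- idx[apply(idx, 1, function(r) length(unique(r)) == 4), ]  # keep genuine orderings of the 4 positions
words <- apply(idx, 1, function(r) paste0("T", paste(mid[r], collapse = ""), "T"))
length(unique(words))  # 12
```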
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191015362739563, "perplexity_flag": "head"}
http://cms.math.ca/Reunions/hiver11/abs/fm
CMS Winter Meeting 2011, Delta Chelsea Hotel, Toronto, December 10-12, 2011 www.smc.math.ca//Reunions/hiver11

Financial Mathematics
Org: Matt Davison (Western), Marcus Escobar (Ryerson), Sebastian Ferrando (Ryerson), Pablo Olivares (Ryerson) and Luis Seco (Toronto)

ALEXANDER ALVAREZ, Ryerson University
Local continuity of stopping times and arbitrage
In this work we extend some of the results in [Bender, Sottinen and Valkeila (08)] to prove the absence of arbitrage in markets driven by non-semimartingale models. To this end, we restrict the portfolio strategies to those that depend on locally continuous stopping times relative to a metric structure in the trajectory space. Technically we rely on a non-probabilistic Ito's formula for functions with finite quadratic variation. We discuss some implications of our results and prove absence of arbitrage for non-semimartingale models having jumps or stochastic volatility. For the analyzed examples we prove the corresponding small ball properties and the local continuity of the portfolio value under different metrics.

ALEX BADESCU, University of Calgary
Hedging GARCH options with generalized innovations
In this paper, we study the performance of different hedging schemes when the asset return process is modelled by a general class of GARCH models. Since the minimal martingale measure fails to produce a probability measure in this setting, we construct local risk minimization (lrm) hedging strategies with respect to a risk neutral measure. Using the conditional Esscher transform and the extended Girsanov principle as our martingale measure candidates, we construct lrm delta hedges based on different distributional assumptions regarding the GARCH innovations. An extensive numerical experiment is conducted to compare these hedges to the standard stochastic volatility delta hedges for different European style option maturities and hedging frequencies.

GERMAN BERNHART, TU München
Numerical density calculation for distributions of the Bondesson class
We address the numerical density calculation via Laplace inversion for distributions of the Bondesson class. The classical Bromwich inversion integral involves serious computational challenges such as highly oscillating integrands and infinite integration bounds. It is proven that a certain contour transformation is admissible for the considered class of distributions, yielding a rapidly declining integrand and allowing for a substitution to a finite interval. The approach is tested for distributions with known density (Gamma distribution, IG distribution) and compared to other approaches for unknown densities (alpha-stable distribution). Analogous procedures can be applied for the efficient numerical pricing of CDO contracts in specific CIID models. The talk is based on the paper “Numerical density calculation for distributions of the Bondesson class” by G. Bernhart, J.-F. Mai, S. Schenk, and M. Scherer.

JOE CAMPOLIETI, Wilfrid Laurier University
Dual Stochastic Transformations of Solvable Diffusions
We present new extensions to a method for constructing several families of solvable one-dimensional time-homogeneous diffusions. Our approach is based on a dual application of the so-called diffusion canonical transformation method that combines smooth monotonic mappings and measure changes via Doob-h transforms.
This gives rise to new multi-parameter solvable diffusions that are generally divided into two main classes; the first is specified by having affine (linear) drift with various resulting nonlinear diffusion coefficient functions, while the second class allows for several specifications of a (generally nonlinear) diffusion coefficient with resulting nonlinear drift function. The theory is applicable to diffusions with singular and/or non-singular endpoints. As part of the results in this paper, we also present a complete boundary classification and martingale characterization of the newly developed diffusion families. The first class of models, having linear drift and nonlinear (state-dependent) volatility functions, is useful for equity derivative pricing in finance, while the second class of diffusions contains new models that are mean-reverting and which are applicable in areas such as interest rate modeling. As specific examples of the first class of affine drift models, we present explicit results for three new families of models. For the second class of nonlinear drift models, we give examples of solvable subfamilies of mean-reverting diffusions and derive some closed-form integral formulas for conditional expectations of functionals that can be used to price bonds and bond options.

BARBARA GOETZ, TU München
Valuation of multi-dimensional derivatives in a stochastic correlation framework
Stochastic volatility models have been in place for some years now. A natural extension of the latter ones is a multivariate model with stochastic correlation. And indeed, the performance of a portfolio or a multi-dimensional derivative depends very much on the joint behaviour of the underlyings, i.e. the covariances, which are not constant over time. However, one of the main problems with the modelling of correlation is intractability because the number of parameters grows quite fast as dimensions increase. The model treated here is based on a stochastic principal component model, which reduces the dimension of the original problem. We reduce complexity by modelling the eigenvalues of the assets instead of the full covariance matrix. We set the eigenvectors constant but assume the eigenvalues stochastic. An empirical analysis shows that the eigenvalues are driven by two mean-reverting components, one which varies in the order of days and the other one which varies in the order of months. Our approach allows a multi-dimensional extension of the Heston model with stochastic volatility, stochastic correlation among assets, between variances and assets as well as between assets and correlation. The proposed model is applied to price end-point as well as path-dependent two-asset options. A closed-form solution for barrier options under stochastic correlation has not been found. Thus, we show how perturbation theory can be used to find easy and well converging approximations to non-vanilla options on two correlated underlyings.

MATHEUS GRASSELLI, McMaster University
An agent-based computational model for bank formation and interbank networks
We introduce a simple framework where banks emerge as a response to a natural need in a society of individuals with heterogeneous liquidity preferences. We examine bank failures and the conditions for an interbank market to be established. We start with an economy consisting of a group of individuals arranged in a 2-dimensional cellular automaton and two types of assets available for investment.
Because of uncertainty, individuals might change their investing preferences and accordingly seek their surrounding neighbours as trading partners to satisfy their new preferences. We demonstrate that the individual uncertainty regarding preference shocks, coupled with the possibility of not finding a suitable trading partner when needed, gives rise to banks as liquidity providers. Using a simple learning process, individuals decide whether or not to join the banks, and through a feedback mechanism we illustrate how banks get established in the society. We then show how the same uncertainty in individual investing preferences that gave rise to banks also causes bank failures. In the second level of our analysis, in a similar fashion, banks are treated as agents and use their own learning process to avoid failures and create an interbank market. In addition to providing a bottom-up model for the formation of banks and interbank markets, our model allows us to address under what conditions bank oligopolies and frequent bank failures are to be observed, and when an interbank market leads to a more stable system with fewer failures and less concentrated market players.

TOM HURD, McMaster University
Analyzing contagion in banking networks
I introduce a class of stylized banking networks and try to predict the size and frequency of contagion events. I find that the domino effect can be understood as an explicit iterated mapping on a set of edge probabilities that converges to a fixed point. A cascade condition is derived that characterizes whether or not an infinitesimal shock to the network can grow to a finite size cascade, in analogy to the basic reproduction number $R_0$ in epidemic modeling. It provides an easily computed measure of the systemic risk inherent in a given banking network topology. An analytic formula is given for the frequency of global cascades, derived from percolation theory on the random network. Two simple examples illustrate that edge assortativity can have a strong effect on the level of systemic risk as measured by the cascade condition. Although the analytical methods are derived for infinite networks, large-scale Monte Carlo simulations demonstrate the applicability of the results to finite-sized networks. Finally, we'll see that a simple graph theoretic quantity, graph assortativity, seems to best capture systemic risk.

CODY HYNDMAN, Concordia University
Generalized filter-based EM algorithm and applications to calibration
The Kalman filter has been applied to a wide variety of financial models where the underlying stochastic processes driving a price are unobservable directly. Maximum likelihood parameter estimation for these models is challenging due to the recursive nature of the Kalman filter as well as the complicated interdependence of the signal and observation equations on multiple parameters. An alternative to direct numerical maximization of the likelihood function is the Expectation Maximization (EM) algorithm, producing a sequence of parameter estimates involving two steps at each iteration: the Expectation step (E-step) and Maximization step (M-step). The filter-based approach developed in Elliott and Hyndman [J. Econom. Dynam. Control 31 (2007), no. 7, 2350–2373] requires only a forward pass through the data and is therefore potentially twice as fast as the smoother-based algorithm.
The filter-based algorithm is expressed in terms of decoupled filters that can be computed independently in parallel on a multiprocessor system, allowing for further gains in efficiency. In this paper we derive new finite-dimensional filters which allow the EM algorithm to be implemented for certain multi-factor commodity price models, generalizing the results of Elliott and Hyndman [op. cit.]. In the cases under consideration the solution to the M-step does not exist in closed form. However, it is possible to approximately solve the M-step by applying one iteration of Newton's method to the high degree polynomials characterizing some of the updated parameters, resulting in a Generalized EM algorithm. The method is illustrated by application to a two-dimensional commodity price model.

SEBASTIAN JAIMUNGAL, University of Toronto
Buy Low, Sell High: A High Frequency Trading Perspective
In this talk I will present a class of self-exciting processes as a promising approach to modeling trading activity at high frequencies. Our model neatly accounts for the clustering of intensity of trades and the feedback effect which trading induces on both market orders as well as the shape of the limit order book (LOB). Further, it allows for efficient calibration to market data based on pseudo-likelihood methods. As well, various probabilistic quantities of interest such as the probability that the next market order is a buy or sell, the distribution of the time of arrival of a buy or sell order, and the probability that the mid-price moves a given amount before a market order arrives are also easily computable. Finally, we study an optimal control problem for a trader who places immediate-or-cancel limit buy-and-sell orders to take advantage of the bid-ask spread. Asymptotic expansions in the level of risk-aversion lead to closed form and intuitive results which are also adapted to the state of the market. Some numerical experiments will be used to demonstrate the utility of the model and optimal strategies. [This is joint work with Alvaro Cartea, U. Carlos III de Madrid and Jason Ricci, U. Toronto]

R KULPERGER, University of Western Ontario
Multivariate GARCH
Multivariate GARCH models are an interesting time series model in finance. We discuss issues about the estimation, consistency and asymptotic normality of the estimators. These models also have many parameters, so some form of parameter reduction is needed. LASSO is a method that is useful for parameter reduction in linear regression. While this work is preliminary, we are studying the use of LASSO type ideas in time series and multivariate GARCH.

ALEXEY KUZNETSOV, York University
Cool Math behind Asian options
Since Asian options were first introduced in Tokyo in 1987, there have appeared almost 3600 research papers related to these financial derivatives (according to Google Scholar web search). One may wonder, what makes these particular options so attractive to researchers in Mathematical Finance? We think that one of the reasons is that there is a lot of beautiful Mathematics related to pricing Asian options. The goal of this talk is to discuss some of the mathematical theories involved in pricing Asian options, both in the classical Black-Scholes setting and in the more general case of Lévy driven models.
In particular, we will discuss the connections with self-similar Markov processes and the Lamperti transformation, the recent results of N. Cai and S. Kou on Asian options for processes with hyper-exponential jumps, and our recent results on processes with jumps of rational transform.

ROMAN MAKAROV, Wilfrid Laurier University
Pricing occupation-time derivatives
New simulation algorithms and analytical methods for pricing occupation-time derivatives under jump-diffusion processes and solvable nonlinear diffusion models are developed. A new efficient method for exact sampling from the distribution function of occupation times of a Brownian bridge is proposed. The method is applied to the exact pricing of continuously-monitored occupation-time derivatives under the double-exponential jump-diffusion process. In Monte Carlo methods for nonlinear solvable diffusion models, the occupation time is estimated using the Brownian bridge interpolation. In the second part of this talk, we consider a special family of occupation-time derivatives, namely proportional step options introduced by Linetsky in [Math. Finance, 9, pp. 55-96 (1999)]. We develop new spectral expansion methods for pricing such options. Our approach is based on the application of the Feynman-Kac formula and the residue theorem. As an underlying asset price process we consider a solvable nonlinear diffusion model such as the constant elasticity of variance (CEV) diffusion model and state-dependent-volatility confluent hypergeometric diffusion processes. This is joint work with Joe Campolieti and Karl Wouterloot.

ROGEMAR MAMON, University of Western Ontario
Weak HMM and its application to asset price modelling
A higher-order hidden Markov model (HMM) is considered in modelling the price dynamics of a risky asset. The log returns of asset prices are governed by a higher-order or the so-called weak Markov chain in a finite-state space. The optimal estimates of the second-order Markov chain and model parameters are derived. This is done via a transformation that converts the second-order HMM into the usual HMM. The model is implemented on a dataset of financial time series and its forecasting performance investigated. An extension of the parameter estimation framework is developed to handle multivariate time series data. The use of higher-order HMM captures both the regime-switching behaviour and long-range dependence in the financial data. (This is joint work with X. Xi, Dept of Applied Mathematics, University of Western Ontario.)

ADAM METZLER, University of Western Ontario
Approximating American Option Prices via Sub-Optimal Exercise Strategies
In this talk we investigate the approximate pricing of American put options by optimizing over sub-optimal exercise strategies. Strategies are taken to be hitting times of the stock price (geometric Brownian motion) to smooth curves, and all curves considered are drawn from parametric families which admit closed-form first-passage time distributions. This allows one to express option values as (very well-behaved) one-dimensional integrals which are easily evaluated numerically. Despite the apparent simplicity of the method it appears to be remarkably accurate, providing an extremely rapid lower bound on the option value. The talk is based on the M.Sc. thesis of W. Xing.
DAVE SAUNDERS, University of Waterloo
Mathematics of Credit Risk Capital in the Trading Book
As part of the regulatory response to the global financial crisis, the Basel Committee on Banking Supervision has revised its rules for determining regulatory capital for credit risk in a bank's trading book. I will discuss mathematical problems related to the calculation of capital under the new regulations.

ANATOLIY SWISHCHUK, University of Calgary
Variance Swap for Local Lévy based Stochastic Volatility with Delay
The valuation of variance swaps for local Lévy based stochastic volatility with delay (LLBSVD) is discussed in this talk. We provide some analytical closed forms for the expectation of the realized variance for the LLBSVD. As applications of our analytical solutions, we fit our model to 10 years of S&P 500 data (2000-01-01 to 2009-12-31) with the variance gamma model and apply the obtained analytical solutions to price the variance swap. This is a joint talk with K. Malenfant.

TONY WARE, University of Calgary
Splitting methods in computational finance
Operator splitting methods form a staple part of our arsenal of approaches to the numerical solution of PDEs. They work by a 'divide and conquer' approach, reducing a complex problem to a sequence of simpler problems, which confers advantages when it comes to designing, coding and analyzing algorithms. We discuss some uses of operator splitting methods for certain types of Hamilton-Jacobi-Bellman equations arising in finance. We will also illustrate how operator splitting can be used to extend the applicability of existing methods to more complex settings; for example, we show how Fourier methods can be applied to option valuation problems with non-constant coefficients or in high dimensional settings.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8866900205612183, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/147935/calculate-the-mass-of-the-ball-x2y2z2-1-with-point-density-exeyez?answertab=oldest
# Calculate the mass of the ball $x^2+y^2+z^2=1$ with point density $e^x+e^y+e^z$ I'm asked in an exercise to calculate the mass of the ball $x^2+y^2+z^2=1$ with a density of $e^x+e^y+e^z$ at a given point. We've only learned triple integration with Cartesian coordinates so far so I'm trying to set up a triple integral using those. But I get sort of stuck in figuring out how I want to set up the integral. My first thought was, I should have one coordinate, say z, go from $-1$ to $1$, y from $-\sqrt{1-z^2}$ to $\sqrt{1-z^2}$ and x from $-\sqrt{1-y^2-z^2}$ to $\sqrt{1-y^2-z^2}$. But the resulting integral turned out to be hard to calculate and the answer seems wrong. Any tips would be appreciated :). Thanks! - I think you mean $\le1$ rather than $=1$, if you indeed want a ball. – anon May 21 '12 at 22:09 Haven't you learned yet how to transform an integral from one coordinate system to another using the Jacobian of the transformation? – Américo Tavares May 21 '12 at 22:10 That's /probably/ what I meant, but the exercise itself actually types it as $x^2+y^2+z^2=1$ for whichever reason. I'm guessing it's a typo since they're still calling it a ball though. – ro44 May 21 '12 at 22:11 @AméricoTavares: Nope, not yet. We've only just started studying triple integration. – ro44 May 21 '12 at 22:11 ## 2 Answers One way to simplify matters is to note that integration is linear and that the region is symmetric under permutations of $x$, $y$, $z$, so the answer will be $3$ times the integral of $e^x$ over the ball. Slice the ball at a given $x$ and you get a disk of radius $\sqrt{1-x^2}$. So this reduces the computation to $$3 \int_{-1}^1 \pi (1-x^2) e^{x}\ dx$$ - Thanks for your assistance! – ro44 May 21 '12 at 22:22 $$M = \int_{V} (\exp(x) + \exp(y) + \exp(z)) dx dy dz = 3 \int_V \exp(x) dxdydz$$ $$\int_V \exp(x) dxdydz = \int_{x=-1}^{x=1} \int_{y=-r(x)}^{y = r(x)} \int_{z=-\sqrt{r(x)^2-y^2}}^{z = \sqrt{r(x)^2 - y^2}} \exp(x) dz dy dx$$ where $r(x) = \sqrt{1-x^2}$ $$\int_V \exp(x) dxdydz = \int_{x=-1}^{x=1} \int_{y=-r(x)}^{y = r(x)} 2 \sqrt{r(x)^2-y^2} \exp(x) dy dx = \int_{x=-1}^{x=1}\exp(x) \pi r(x)^2 dx\\ = \pi\int_{x=-1}^{x=1}\exp(x) (1-x^2) dx = \frac{4 \pi}{e}$$ Hence, the mass is $$\dfrac{12 \pi}{e}$$ - Thanks for your assistance! – ro44 May 21 '12 at 22:22
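For completeness, here is the antiderivative behind that last integral (a small added verification of the step both answers leave implicit): since $\int x^2 e^x\,dx=(x^2-2x+2)e^x$, we get $$\int (1-x^2)e^x\,dx = -(x-1)^2 e^x + C,$$ so $$\int_{-1}^{1}(1-x^2)e^x\,dx = \Big[-(x-1)^2e^x\Big]_{-1}^{1} = 0-\left(-\frac{4}{e}\right) = \frac{4}{e},$$ and the mass is indeed $3\pi\cdot\frac{4}{e}=\frac{12\pi}{e}$.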
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381691813468933, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2011/09/16/oriented-manifolds-with-boundary/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician

## Oriented Manifolds with Boundary

Let’s take a manifold with boundary $M$ and give it an orientation. In particular, for each $p\in M$ we can classify any ordered basis as either positive or negative with respect to our orientation. It turns out that this gives rise to an orientation on the boundary $\partial M$.

Now, if $p\in\partial M$ is a boundary point, we’ve seen that we can define the tangent space $\mathcal{T}_pM$, which contains — as an $n-1$-dimensional subspace — $\mathcal{T}_p(\partial M)$. This subspace cuts the tangent space into two pieces, which we can distinguish as follows: if $(U,x)$ is a coordinate patch around $p$ with $x(p)=0$, then the image of $\partial M$ near $p$ is a chunk of the hyperplane $x^n=0$. The inside of $M$ corresponds to the area where $x^n>0$, while the outside corresponds to $x^n<0$. And so the map $x_{*p}$ sends a vector $v\in\mathcal{T}_pM$ to a vector in $\mathcal{T}_0\mathbb{R}^n$, which either points up into the positive half-space, along the hyperplane, or down into the negative half-space. Accordingly, we say that $v$ is “inward-pointing” if $x_{*p}(v)$ lands in the first category, and “outward-pointing” if it lands in the last. We can tell the difference by measuring the $n$th component — the value $\left[x_{*p}(v)\right](u^n)=v(u^n\circ x)=v(x^n)$. If this value is positive the vector is inward-pointing, while if it’s negative the vector is outward-pointing.

This definition may seem to depend on our choice of coordinate patch, but the division of $\mathcal{T}_pM$ into halves is entirely geometric. The only role the coordinate map plays is in giving a handy test to see which half a vector lies in.

Now we are in a position to give an orientation to the boundary $\partial M$, which we do by specifying which bases of $\partial M$ are “positively oriented” and which are “negatively oriented”. Specifically, if $v_1,\dots,v_{n-1}$ is a basis of $\mathcal{T}_p(\partial M)\subseteq\mathcal{T}_pM$ then we say it’s positively oriented if for any outward-pointing $v\in\mathcal{T}_pM$ the basis $v,v_1,\dots,v_{n-1}$ is positively oriented as a basis of $\mathcal{T}_pM$, and similarly for negatively oriented bases.

We must check that this choice does define an orientation on $\partial M$. Specifically, if $(V,y)$ is another coordinate patch with $y(p)=0$, then we can set up the same definitions and come up with an orientation on each point of $V\cap\partial M$. If $U$ and $V$ are compatibly oriented, then $U\cap\partial M$ and $V\cap\partial M$ must be compatible as well. So we assume that the Jacobian of $y\circ x^{-1}$ is everywhere positive on $U\cap V$. That is

$\displaystyle\det\left(\frac{\partial(y^i\circ x^{-1})}{\partial u^j}\right)>0$

We can break down $x$ and $y$ to strip off their last components. That is, we write $x(q)=(\tilde{x}(q),x^n(q))$, and similarly for $y$. The important thing here is that when we restrict to the boundary $U\cap\partial M$ the $\tilde{x}$ work as a coordinate map, as do the $\tilde{y}$. So if we set $u^n=0$ and vary any of the other $u^j$, the result of $y^n(x^{-1}(u))$ remains at zero.
And thus we can expand the determinant above:

$\displaystyle\begin{aligned}0&<\det\left(\frac{\partial(y^i\circ x^{-1})}{\partial u^j}\bigg\vert_{u^n=0}\right)\\&=\det\begin{pmatrix}\displaystyle\frac{\partial(y^1\circ x^{-1})}{\partial u^1}\bigg\vert_{u^n=0}&\cdots&\displaystyle\frac{\partial(y^1\circ x^{-1})}{\partial u^{n-1}}\bigg\vert_{u^n=0}&\displaystyle\frac{\partial(y^1\circ x^{-1})}{\partial u^n}\bigg\vert_{u^n=0}\\\vdots&\ddots&\vdots&\vdots\\\displaystyle\frac{\partial(y^{n-1}\circ x^{-1})}{\partial u^1}\bigg\vert_{u^n=0}&\cdots&\displaystyle\frac{\partial(y^{n-1}\circ x^{-1})}{\partial u^{n-1}}\bigg\vert_{u^n=0}&\displaystyle\frac{\partial(y^{n-1}\circ x^{-1})}{\partial u^n}\bigg\vert_{u^n=0}\\{0}&\cdots&0&\displaystyle\frac{\partial(y^n\circ x^{-1})}{\partial u^n}\bigg\vert_{u^n=0}\end{pmatrix}\end{aligned}$

The determinant is therefore the determinant of the upper-left $(n-1)\times(n-1)$ submatrix — which is the Jacobian determinant of the transition function $\tilde{y}\circ\tilde{x}^{-1}$ on the intersection $(U\cap\partial M)\cap(V\cap\partial M)$ — times the value in the lower right. If the orientations induced by those on $U$ and $V$ are to be compatible, then this Jacobian determinant on the boundary must be everywhere positive. Since the overall determinant is everywhere positive, this is equivalent to the lower-right component being everywhere positive on the boundary. That is:

$\displaystyle\frac{\partial(y^n\circ x^{-1})}{\partial u^n}\bigg\vert_{u^n=0}>0$

But this asks how the $n$th component of $y$ changes as the $n$th component of $x$ increases; as we move away from the boundary. But, at least where we start on the boundary, $y^n$ can do nothing but increase! And thus this partial derivative must be positive, which proves our assertion.

Posted by John Armstrong | Differential Topology, Topology
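A concrete example of the induced-orientation convention (an addendum of mine, not part of the original post): take $M=\{(u^1,u^2)\in\mathbb{R}^2 : u^2\geq 0\}$ with its standard orientation, so $\partial M$ is the $u^1$-axis. The vector $-\frac{\partial}{\partial u^2}$ is outward-pointing, and for the single basis vector $\frac{\partial}{\partial u^1}$ of $\mathcal{T}_p(\partial M)$ the pair $\left(-\frac{\partial}{\partial u^2},\frac{\partial}{\partial u^1}\right)$ has $$\det\begin{pmatrix}0&1\\-1&0\end{pmatrix}=1>0,$$ so it is positively oriented in $\mathcal{T}_pM$. Hence $\frac{\partial}{\partial u^1}$ is a positively oriented basis for the boundary: the induced orientation on the boundary of the upper half-plane points in the positive $u^1$ direction.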
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 64, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9091255068778992, "perplexity_flag": "head"}
http://mathoverflow.net/questions/110147/series-for-envelope-of-triangle-area-bisectors
## Series for envelope of triangle area bisectors

The lines which bisect the area of a triangle form an envelope as shown in this picture

It is not difficult to show that the ratio of the area of the red deltoid to the area of the triangle is $$\frac{3}{4} \log_e(2) - \frac{1}{2} \approx 0.01986.$$ But this is also $$\sum_{n=1}^{\infty}\frac{1}{(4n-1)(4n)(4n+1)}.$$ Is there any connection between the series and the deltoid? Or is it just a coincidence?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484897255897522, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/1250/how-do-i-estimate-convergence-in-monte-carlo-methods
# How do I estimate convergence in monte carlo methods?

I am experimenting with Monte Carlo methods. I'd like to measure/estimate convergence with a graph/chart. How do I do that? Can anyone please direct me to relevant documentation/links or even give me tips or general guidelines? Thanks in advance, Julien.

- Convergence of what? Can you please be more specific. Also what applications/tools do you use to experiment with Monte Carlo? – Dmitrii I. May 31 '11 at 10:35 Hello. I mean convergence of my results. I use Java in order to evaluate a European Option. So the results are the option prices. I have succeeded in calculating the sample variance running 20 runs. I am not sure if there is a better alternative than the sample variance. Julien. – balteo May 31 '11 at 11:35 Agree with Val. What type of option? Why monte carlo for a european when there's a closed-form solution (BSM)? If you are questioning what your chart should look like, typically you have price on the Y-axis and run count on the X-axis. – strimp099 Aug 6 '11 at 11:59

## 2 Answers

You are typically interested in evaluating $E\left[ f(X_T)-f(\bar{X}_T^{(n)}) \right]$ (referred to as the weak error), where

• $X_t$ is the solution of the SDE $dX_t^x=b(X_t^x)dt+\sigma(X_t^x)dW_t$

• $\bar{X}_t^{(n)}=\bar{X}_{\underline{t}}^{(n)}+b(\bar{X}_{\underline{t}}^{(n)})\cdot (t-\underline{t})+\sigma(\bar{X}_{\underline{t}}^{(n)})\cdot (W_t-W_{\underline{t}})$ is the continuous Euler discretization of the SDE, with time step $T/n$ and $\underline{t}$ the last grid point before $t$.

Under some regularity assumptions on both your SDE coefficients and the payoff function $f$,

1. The rate of convergence is $O\left(\frac{1}{n}\right)$.

2. Expansion of order $\frac{1}{n}$: $E\left[ f(X_T)-f(\bar{X}_T^{(n)}) \right]= \sum_{k=1}^R\frac{c_k}{n^k}+O(\frac{1}{n^{R+1}})$.

A strong condition would be that $b,\sigma,f$ are $C^\infty$ with $f$ having polynomial growth (i.e. $\exists r>0, |f(x)|\leq C\cdot(1+|x|^r)$).

Basically, if you don't know the true value of your European option you would approximate it with $E\left[ f(\bar{X}_T^{(n\approx\infty)})\right]$ (an Euler scheme on a very fine grid), and then plot $n\rightarrow E\left[ f(\bar{X}_T^{(n\approx\infty)})-f(\bar{X}_T^{(n)}) \right]$ and observe an $O(1/n)$ behavior (and try to guess the value of $c_1$).

Note also that using the second assertion you might avoid using an estimate of your option. Indeed, consider $\bar{X}_T^{(n)}$ and $\bar{X}_T^{(2n)}$, two Euler schemes with different time steps (the second has twice as many time steps). Then, by applying the error expansion to the first scheme, $E\left[ f(X_T)-f(\bar{X}_T^{(n)}) \right]= \frac{c_1}{n}+O(\frac{1}{n^{2}})$ and then to the second scheme $E\left[ f(X_T)-f(\bar{X}_T^{(2n)}) \right] = \frac{c_1}{2n}+O(\frac{1}{n^{2}})$ we get $E\left[f(\bar{X}_T^{(2n)}) - f(\bar{X}_T^{(n)}) \right] = \frac{c_1}{2n}+O(\frac{1}{n^{2}})$

Finally, without knowing the exact value of your European option ($E(f(X_T))$) you can get the exact (first order) rate of convergence, $n\rightarrow E\left[f(\bar{X}_T^{(2n)}) - f(\bar{X}_T^{(n)}) \right]= \frac{c_1}{2n}$. Needless to say, $c_1= 2n\cdot E\left[f(\bar{X}_T^{(2n)}) - f(\bar{X}_T^{(n)}) \right]$ to first order.

(This useful expansion, also known as the Romberg expansion, is also used to build accelerated Monte Carlo estimates: with the same notations we obtain $E\left[f(X_T)\right]-\left(2E\left[f(\bar{X}_T^{(2n)})\right]-E\left[f(\bar{X}_T^{(n)})\right]\right)=O(\frac{1}{n^2})$.)

A (dated) reference would be Bally and Talay.

Julien, frankly I have no idea what your research question is... Since you are quite vague in formulating your question I can only provide a vague answer...
The following academic papers might be of use: http://www.jstor.org/stable/1428344 and http://www.math.ethz.ch/~mschweiz/Files/converge.pdf. I also suggest you take a look at Introducing Monte Carlo Methods with R by Christian P. Robert and George Casella; it will give you concrete examples of applying Monte Carlo methodologies. -
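As a concrete companion to the first answer, here is a minimal Python sketch (an illustration with invented Black–Scholes parameters, not code from the thread) that prices a European call with Euler schemes of increasing step count and prints the Romberg-style differences $E\left[f(\bar X_T^{(2n)})-f(\bar X_T^{(n)})\right]$, which is exactly the quantity one would plot against $n$ to read off the first-order weak convergence.

```python
# Minimal sketch: weak-error convergence of an Euler scheme for a European call
# under Black-Scholes dynamics. All parameter values below are illustrative.
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 200_000
rng = np.random.default_rng(0)

def euler_price(n_steps):
    """Monte Carlo price of the call using an Euler scheme with n_steps steps."""
    dt = T / n_steps
    S = np.full(n_paths, S0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S = S + r * S * dt + sigma * S * dW      # Euler step for dS = r S dt + sigma S dW
    payoff = np.exp(-r * T) * np.maximum(S - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

for n in (2, 4, 8, 16, 32):
    p_n, se_n = euler_price(n)
    p_2n, _ = euler_price(2 * n)
    # E[f(X^(2n)) - f(X^(n))] ~ c1/(2n): plotting this against n on a log-log
    # chart exhibits the first-order weak error without knowing the true price.
    print(f"n={n:3d}  price={p_n:.4f} (se {se_n:.4f})  diff(2n, n)={p_2n - p_n:+.4f}")
```

In practice one would use common random numbers (simulate the fine scheme and subsample it for the coarse one) so that the statistical noise does not swamp the small bias difference, and plot the results on a log-log chart.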
http://math.stackexchange.com/questions/219846/does-a-contour-of-local-extrema-of-a-function-f-mathbbr2-to-mathbbr?answertab=active
# Does a contour of local extrema of a function $f : \mathbb{R}^2 \to \mathbb{R}$ need always be smooth? Consider a smooth function $f : \mathbb{R}^2 \to \mathbb{R}$, I wonder that any contour (curve) in $\mathbb{R}^2$ where every point of it is a local maxima of $f$, need be a smooth curve? Edit : $f$ need to be smooth. Edit 2 : By contour I mean curve of nonzero arc length. Elaboration (after comments by Will and copper.hat) Let the function be $f(x,y)$. I want the contour to have at every point on it, the $\frac{\partial f}{\partial x} = 0$ and $\frac{\partial^2 f}{\partial x^2} < 0$. Is any such contour which is not smooth possible? - What sort of smoothness do you want on $f$? You could be really obnoxious and let $f$ be the characteristic function of a nonsmooth curve if you make no assumptions. – Zach L. Oct 24 '12 at 3:23 @Zach L. : Sorry, I forgot to mention the actual thing. $f$ is smooth that is $\mathcal{C}^{\infty}$ – Rajesh D Oct 24 '12 at 3:26 What do you mean by 'where every point of it is a local maxima of $f$'? The function $f(x)=-(x_1^2+x_2^2)$ has just one (local) maximum at $(0,0)$, does this constitute a smooth curve? – copper.hat Oct 24 '12 at 4:11 @copper.hat : Thanks for the comment. The curve should be of non zero arc length. – Rajesh D Oct 24 '12 at 4:15 ## 1 Answer $$f(x,y) = - x^2 y^2 {}{}{}{}{}$$ - What is the contour of maxima which is not smooth for this function? – Rajesh D Oct 24 '12 at 4:34 @RajeshD, the $x$ axis and the $y$ axis. Not smooth at the origin. – Will Jagy Oct 24 '12 at 4:36 But it doesn't have a local maximum at $(0,0)$ – Rajesh D Oct 24 '12 at 4:43 +1 It has local maxima on the axes. – copper.hat Oct 24 '12 at 5:04 – Rajesh D Oct 24 '12 at 5:22 show 6 more comments
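To unpack Will Jagy's answer a little, here is a short symbolic check (added for illustration, not part of the original exchange) that for $f(x,y) = -x^2y^2$ the gradient vanishes on both coordinate axes and $f \le 0$ everywhere with equality exactly on the axes, so the set of local (indeed global) maxima is the union of the two axes, a cross that is not smooth at the origin.

```python
# Quick symbolic check of the answer f(x, y) = -x^2 y^2 (illustration only).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = -x**2 * y**2

fx  = sp.diff(f, x)      # -2*x*y**2
fy  = sp.diff(f, y)      # -2*x**2*y
fxx = sp.diff(f, x, 2)   # -2*y**2

# On the x-axis (y = 0) and the y-axis (x = 0) the whole gradient vanishes, and
# f <= 0 everywhere with equality exactly when x*y = 0, so every point of the
# union of the two axes is a global (hence local) maximum of f.
print([e.subs(y, 0) for e in (f, fx, fy, fxx)])  # [0, 0, 0, 0]
print([e.subs(x, 0) for e in (f, fx, fy)])       # [0, 0, 0]
```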
http://crypto.stackexchange.com/questions/2846/what-is-the-difference-between-known-plaintext-attack-and-chosen-plaintext-attac/2850
# What is the difference between known-plaintext attack and chosen-plaintext attack? I am very confused between the concept of known-plaintext attack and chosen-plaintext attack. It seems to me that these two are the same thing, but it definitely is not. Can anyone explain to me how these two differ? - 1 Welcome to Cryptography Stack Exchange. Your question was migrated here because of being related to the more theoretic parts of cryptography, which are on-topic here (and not so much on Security Stack Exchange). Please register your account here, too, to be able to comment and accept an answer. – Paŭlo Ebermann♦ Jun 10 '12 at 20:31 ## 3 Answers It's the difference between an active and a passive attacker: • Known plaintext attack: The attacker knows at least one sample of both the plaintext and the ciphertext. In most cases, this is recorded real communication. If the XOR cipher is used for example, this will reveal the key as `plaintext xor ciphertext`. • Chosen plaintext attack: The attacker can specify his own plaintext and encrypt or sign it. He can carefully craft it to learn characteristics about the algorithm. For example he can provide an empty text, a text which consists of one "a", two "aa", ... For example: if the Vigenère cipher is used, it is very easy to extract the key length and recover the key by repeating one letter. So the second type of attack is a lot more powerful. - And just for completeness, note that the "known plaintext" is a special case of "chosen plaintext". – B-Con Jun 10 '12 at 23:39 @Hendrik Can known/chosen plaintext attack be carried out on Hash functions? – Pacerier Jul 3 '12 at 16:27 As others have pointed out, there are some ciphers that can be broken if all you have is a known plaintext and the ciphertext. In general, because of this, those ciphers are considered very vulnerable and are not used anywhere. Or I should say, where they are used, the keys are generated (pseudo-) randomly and only used once. However, if the attacker can choose the plaintext, more commonly used ciphers become insecure. In particular Public-Key cryptography has a glaring weakness in this regard, because signing a plaintext is exactly the same operation as decrypting a ciphertext. If an attacker can get a target to sign a message anyone encrypted with the target's public key (i.e. sent to the target) and retrieve the signature, then the attacker has recovered the plaintext of the message. This is one reason digital signature schemes are set up to only sign a hash of the message, not the entire message itself. In more sophisticated attacks, using large numbers of chosen plaintexts can reveal patterns in the ciphertext that in turn reveal some if not all of the bits of the key. Usually, though, the number of chosen plaintexts is huge: millions to trillions or more. A cipher that is vulnerable to known plaintext attacks is of course vulnerable to chosen plaintext attacks, but more importantly can be broken without any access to the encryption device. Intercepting the communications alone compromises the cipher. On the other hand, chosen plaintext attacks do require access to the encryption device, and thus are considered secure as long as the encryption device itself is secure. - “Signing a plaintext is exactly the same operation as decrypting a ciphertext”: only if you do it wrong. For example, RSA-OAEP is an asymmetric encryption algorithm, RSA-PSS is an asymmetric signature algorithm, raw RSA is not a cryptographic algorithm but a building block for one. 
(The padding scheme isn't the only thing that distinguishes RSA-based signature and encryption algorithm; Schneier recommends using different public exponents, and common wisdom avoids using the same key for both purposes.) – Gilles Jun 13 '12 at 14:03 A known plaintext attack is that if you know any of the plaintext that has been encrypted and have the resulting encrypted file, with a flawed encryption algorithm you can use that to break the rest of the encryption. Example: We saw this with the old pkzip encryption method. In this case if you had any of the unencrypted files in the archive, you could use that to obtain the key to break the rest. A chosen plaintext attack is the same thing except you get to choose the plaintext which can be useful. In this case the attacker determines what will be encrypted and then uses the result to determine the key (or perhaps other less useful information) of the encryption. Example: A good example here is XOR encryption. If you can choose the plaintext and get to see the result, you can use those to easily determine the key being used. You could also use a known plaintext attack with non-salted hashes. So if I choose a password and can see the resulting hash, I could search to see if there are any other similar hashes and therefore know they have the same password. So yeah they are basically the same thing, its really just a matter of what you have to work with or what you are trying to accomplish. - +1, but xor is the standard example for known-plaintext based attacks: `cyphertext = key xor plaintext` implies `key = plaintext xor cyphertext`. – Hendrik Brummermann♦ Jun 10 '12 at 22:04 If you know plaintext $X$ and its encrypted form $Y$ where $Y = X \oplus Z$, $Z$ being the key, then the key is recoverable as $X \oplus Y$, and whether you get to choose $X$ (as in a chosen plaintext attack) or just have some arbitrary $X$ that you chanced upon (as in a known plaintext attack) is completely irrelevant. You could ease your life slightly by choosing a plaintext $X = 0$ to get the key as $Y$ directly (as opposed to doing a $X\oplus Y$ as in a known plaintext attack), but I think that is not much of an advantage. – Dilip Sarwate Jun 10 '12 at 22:08
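To make the XOR remark in the accepted answer concrete, here is a tiny Python sketch (the cipher, key and messages are invented for the example): one known plaintext/ciphertext pair exposes the keystream of a repeating-XOR cipher, after which any other message under the same key can be read.

```python
# Illustration of "key = plaintext xor ciphertext" for a toy repeating-XOR cipher.
# The key and the messages below are made up for the example.
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

key = b"K3y!"                                              # secret repeating key
p1  = b"ATTACK AT DAWN IS THE PLAN TODAY"                  # plaintext the attacker knows
c1  = xor_bytes(p1, key)                                   # matching intercepted ciphertext
c2  = xor_bytes(b"RETREAT AT DUSK, REGROUP AT BASE", key)  # another intercepted ciphertext

keystream = xor_bytes(p1, c1)   # known-plaintext step: plaintext xor ciphertext = keystream
recovered_key = keystream[:4]   # here the key length (4) is assumed known or guessed
print(recovered_key)            # b'K3y!'
print(xor_bytes(c2, recovered_key))   # the second message decrypts with the recovered key
```

A chosen-plaintext attacker could do slightly better by submitting all-zero bytes, making the ciphertext equal to the keystream itself, which is the distinction Dilip Sarwate's comment draws.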
http://unapologetic.wordpress.com/2007/06/13/zero-objects-kernels-and-cokernels/?like=1&_wpnonce=cb13afa090
# The Unapologetic Mathematician ## Zero objects, Kernels, and Cokernels A zero object in a category $\mathcal{C}$ is, simply put, both initial and terminal. Usually we’ll write $\mathbf{0}$ for a zero object, but sometimes $Z$, or even $\mathbf{1}$ in certain circumstances. While initial objects and terminal objects are both nice, zero objects are even nicer. Since a zero object is terminal, there is a unique morphism in $\hom_\mathcal{C}(X,\mathbf{0})$ for each object $X$. Since it’s initial, there’s a unique morphism in $\hom_\mathcal{C}(\mathbf{0},Y)$ for each object $Y$. Now we can put these together: for any two objects $X$ and $Y$ there is a unique morphism $0:X\rightarrow Y$ which factors through $\mathbf{0}$. Take the unique arrow from $X$ to $\mathbf{0}$, then the unique arrow from $\mathbf{0}$ to $Y$. This picks out a special element of each hom set, just for having a zero object. We saw that the trivial group $\mathbf{1}$ in the category of groups is both initial and terminal, so it’s a zero object. If we’re just looking in the category $\mathbf{Ab}$ we usually call the trivial abelian group $\mathbf{0}$, and it’s a zero object. The initial and terminal object in $\mathbf{Set}$ are different, so this category does not have a zero object. It’s often useful to remedy this last case by considering the category of “pointed” sets. A pointed set is a pair $(X,x)$ where $X$ is a set and $x\in X$ is any element of the set. A morphism of pointed sets is a function $f:X\rightarrow Y$ so that $f(x)=y$. The marked point has to go to the marked point. This gives us the category $\mathbf{pSet}$ of pointed sets. It’s easily checked that the pointed set $(\{x\},x)$ is both initial and terminal, so it is a zero object in $\mathbf{pSet}$. If a category $\mathcal{C}$ has a zero object then we have a special morphism in each hom set, as I noted above. If we have two morphisms between a given pair of objects we can ask about their equalizer and coequalizer. But now we have one for free! So given any arrow $f:X\rightarrow Y$ and the special zero arrow $0:X\rightarrow Y$, we can ask about their equalizer and coequalizer. In this special case we call them the “kernel” and “cokernel” of $f$, respectively. I’ll say more about the kernel, but you should also think about dualizing everything to talk about the cokernel. Given a morphism $f:X\rightarrow Y$ its kernel is an morphism $k:\mathrm{Ker}(f)\rightarrow X$ so that $f\circ k=0$ as morphisms from $\mathrm{Ker}(f)$ to $Y$. Also, given any other morphism $h:H\rightarrow X$ with $f\circ h=0$ we have a unique morphism $g:H\rightarrow\mathrm{Ker}(f)$ with h=k\circ g\$. This is just the definition of equalizer over again. As with equalizers, the morphism $k$ is monic, so we can view $\mathrm{Ker}(f)$ as a subobject of $X$ and $k$ as the inclusion morphism. Let’s look at this in the case of groups. If we have a group homomorphism $f:X\rightarrow Y$ the kernel will be group included into $X$ by the monomorphism $k$ — a subgroup. We also have that $f\circ k=0$, so everything that starts out in $\mathrm{Ker}(f)$ gets sent to the identity element in $Y$. If we have any other homomorphism $h:H\rightarrow X$ whose image gets sent to the identity in $Y$, then it factors through $\mathrm{Ker}(f)$, so the image of $h$ lands in the image of $k$. That is, $\mathrm{Ker}(f)$ picks out the whole subgroup of $X$ that gets sent to the identity in $Y$ under the homomorphism $f$. And that’s exactly what we called the kernel of a group homomorphism way back in February. 
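For readers who like to see the group case computed out, here is a toy check (an illustration added here, not part of the post): the reduction homomorphism $f:\mathbb{Z}/12\to\mathbb{Z}/4$ has kernel $\{0,4,8\}$, which is exactly the subgroup of elements sent to the identity, and any homomorphism whose composite with $f$ is zero lands inside it.

```python
# Toy illustration: kernel of the group homomorphism f : Z/12 -> Z/4, f(x) = x mod 4.
n, m = 12, 4
f = lambda x: x % m

kernel = [x for x in range(n) if f(x) == 0]
print(kernel)                                   # [0, 4, 8]

# The kernel is closed under addition mod 12, i.e. the monic k : Ker(f) -> Z/12
# really is the inclusion of a subgroup.
assert all((a + b) % n in kernel for a in kernel for b in kernel)

# A homomorphism h : Z/3 -> Z/12 with f(h(x)) = 0 for all x, e.g. h(x) = 4x,
# has its image inside Ker(f), so it factors through the kernel.
h = lambda x: (4 * x) % n
assert all(f(h(x)) == 0 and h(x) in kernel for x in range(3))
print("h factors through the kernel")
```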
Often enough we aren’t really interested in all equalizers — just kernels will do. So, if a category has a zero object and if every morphism has a kernel, we say that the category “has kernels”. Dually, we say it “has cokernels”. Having a zero object and having equalizers clearly implies having kernels, but it’s possible to have kernels without having all equalizers. ### Like this: Posted by John Armstrong | Category theory ## 6 Comments » 1. Maybe, when summer comes and you have more spare time, you could set up a web page with the (linked) titles of your category theory posts in a table of contents fashion. Unless you are already negotiating the book rights, that is . Comment by estraven | June 13, 2007 | Reply 2. I’ve been pondering a bit of a reorganization. WordPress defaults are a nice interface, but there are some things I’d rather were different. Unfortunately, I don’t really think I’d be that great at tweaking all the CSS on my own. So, failing a volunteer web designer coming out from the woodwork — with credit given where due, naturally — I’ll be doing what little I can on my own to organize a bit. Also, once I have some web space at Tulane I can set up such a list of topics as you suggest. Comment by | June 13, 2007 | Reply 3. Hrm. This woodwork? Oh, and if you want to, I have a wardrobe server where I’d happily host you as well. Either set you up a blog or a wiki or all of it or … well, anything you like, really. Comment by | June 14, 2007 | Reply 4. [...] our category of vector spaces has a biproduct — the direct sum — and in particular a zero object — the trivial -dimensional vector space . It also has a tensor product, which makes this a [...] Pingback by | May 19, 2008 | Reply 5. [...] should immediately ask: is this representation a zero object? Suppose we have a representation . Then there is a unique arrow sending every vector to . [...] Pingback by | December 8, 2008 | Reply 6. [...] because these are the morphisms in the category of -modules. It turns out that this category has kernels and has images. Those two references are pretty technical, so we’ll talk in more [...] Pingback by | September 29, 2010 | Reply « Previous | Next » ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
http://topologicalmusings.wordpress.com/2007/11/29/another-elementary-number-theory-problem/
Todd and Vishal’s blog Topological Musings Another elementary number theory problem November 29, 2007 in Problem Corner | Tags: andreescu, elementary, mathematical, number, reflections, theory, titu This one, by Dr. Titu Andreescu (of USAMO fame), is elementary in the sense that the solution to the problem doesn’t require anything more than arguments involving parity and congruences. I have the solution with me but I won’t post it on my blog until Jan 19, 2008, which is when the deadline for submission is. By the way, the problem (in the senior section) is from the $6^{th}$ issue of Mathematical Reflections, 2007. Problem: Find the least odd positive integer $n$ such that for each prime $\displaystyle p, \, \frac{n^2-1}{4} + np^4 + p^8$ is divisible by at least four (distinct) primes. • 221,437 hits 11 comments very interesting. January 17, 2008 at 3:09 am John smith I started with p=2 to see what happens. I also assumed that if i find the least value n, it would be valid for all P but of course this could be false. I found that n must satisfy n(n+1) such that there are at least 3 unique prime factors and that 2n+17 must also contain these factors. is that right ? not sure how to find n or why if i do it should be divisible by at least four (distinct) primes for ALL P though. Hint: ——- Since n is an odd positive integer, let n = 2k+1, where k is some non-negative integer. Plug this back into the expression E, and simplify E. This helps to get rid of the 4 in the denominator. Now, plug p=2, and try a few (small) values for k, and determine the smallest value of k for which E is a product of 4 prime numbers. This helps to determine the smallest n because n = 2k+1. Now, argue that for all odd primes p, the smallest value of n you found out above always makes E an integer which is divisible by four prime numbers. (This is non-trivial and requires some effort.) One last hint: —————– Work with small prime numbers in the last case above. January 17, 2008 at 8:15 pm John smith the only other clue is that E is divisble by 4 primes when p=2 and 5 when p>2? the trouble is i dont know how to determine the smallest value of k for which E is a product of 4 prime numbers. I got n(n+1)=2^a.p^b.q^c.r^d where p,q,r are prime and therefore 16(2n+17)= some powers of (2.p.q.r ) I tried combinations like 2.3.5.7 but it fails. Okay, we let n = 2k+1. Now for p=2, plug k=0,1,2,… (a few values) and determine the (smallest) value of k which ensures that E is product of four primes. (k is a small number.) Now, for p > 2, plug the above value of k (you found above) back into E, and tell me what your expression E (only in terms of p) is. January 18, 2008 at 3:29 am John smith E=30+11p^4+p^8 But I’ve got a few questions: Firstly how did you get the value n=11 by sheer trial and error? Secondly at this point do we just instinctively make a conjecture that n=11 is the smallest and go from there? I didn’t get n=11 as the least odd integer for this problem. The least n is slightly less than 11. Yes, you kinda make a conjecture (or hope) that the smallest n obtained for p=2 will also work for bigger primes p. I would also like to make a small addition to the hints I provided earlier. Verify that the smallest n you got earlier also satisfies the conditions of the problem for p=3. And, then for all primes p > 3, you argue in a general way that the smallest n indeed satisfies the conditions of the problem once again. Here, you will have to again make some conjectures about what primes will always be factors of E for p > 3. 
Without guessing what primes p will be factors of E, the problem will be almost impossible to solve. Lastly, making conjectures is an important part of mathematical activity. I think you probably don’t like it January 18, 2008 at 7:53 pm John smith n=9 January 18, 2008 at 8:00 pm John smith It may or may not be relevant here, but is there a better way to find all n such that E, for a given value p, is divisible by 4 primes than trial and error? January 18, 2008 at 8:01 pm John smith Why do you think I don’t like it? You are right about n=9. You are half-way through the solution! I don’t have a way to find all n such that for a given prime p, E is divisible by at least four primes. If there is one, then finding the same is going to be quite difficult, I think. But, yeah, that would certainly be a good problem! The reason I said you may not like the idea of making conjectures in this problem is at some level doing so “diminishes” the elegance of the solution. That’s my guess, and I could be very wrong.
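The numeric hint in the comments (check $p=2$ first and find the least odd $n$, which turns out to be $9$) is easy to reproduce. The following Python sketch is an illustration added here, not part of the original post: it counts distinct prime factors of $E(n,p)=\frac{n^2-1}{4}+np^4+p^8$ by trial division, searches odd $n$ for $p=2$, and then spot-checks the same $n$ against a few small odd primes (numerical evidence only, not the required proof).

```python
# Illustrative search (not from the original post) for the least odd n such that
# E(n, p) = (n^2 - 1)/4 + n*p^4 + p^8 has at least 4 distinct prime factors,
# first for p = 2 and then spot-checked for a few small odd primes.
def distinct_prime_factors(n):
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def E(n, p):
    return (n * n - 1) // 4 + n * p**4 + p**8

n = 1
while distinct_prime_factors(E(n, 2)) < 4:
    n += 2
print(n, E(n, 2))                # 9 420  (420 = 2^2 * 3 * 5 * 7)

for p in (3, 5, 7, 11, 13):      # evidence only; the general case needs a proof
    print(p, distinct_prime_factors(E(9, p)) >= 4)
```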
http://math.stackexchange.com/questions/45117/what-is-a-null-set/45126
# What is a null set? I am very confused with null sets. I get that a set which has no elements will be called a null set but I am not getting the examples given below. Please help me by explaining how $P$,Q,R\$ are null sets? Thank-you - 1 Can you please clarify further? The page you linked to at Google books isn't available to everyone (it certainly isn't to me at least, so it's probably the same for someone else) – kahen Jun 13 '11 at 13:52 @kahen: I have used the image from book now. – Fahad Uddin Jun 13 '11 at 14:02 14 That's pretty awful writing to say "The null set... is denoted by $\phi$" and then four lines down redefine it with "The set $\phi$ = {0} is not a null set". The rest of the writing isn't that good either; I would find a different textbook if I were you. – Rahul Narain Jun 13 '11 at 16:24 @Rahul, @fahad: I agree with Rahul! If this is the text you're required to use for a class, perhaps you can supplement that text with another? I'm sure there are many good suggestions available here, depending on what level you're at in your studies. – amWhy Jun 13 '11 at 23:16 4 @Doug: What would you make of a book all of whose variables, sets, functions, etc. were denoted "$\phi$" and no comment was made when switching between different meanings? No one was arguing with the statement that "$\{0\}$ is not a null set", the issue is with the unnecessary and uncommented-upon reuse of the symbol $\phi$. – Zev Chonoles♦ Jun 21 '11 at 2:57 show 1 more comment ## 3 Answers Perhaps what you find confusing is the use of set-builder notation to define $P, Q, R$: Included in between { ... } are the condition(s) that any "candidate" element must satisfy in order to be included in the set, and a set defined by set-builder notation contains all, and only, those elements satisfying all the conditions given. In each of $P,\; Q, \;R$, set-builder notation is used to provide the conditions for inclusion in each set, respectively. Note: unless otherwise stipulated, you can take conditions separated by a comma to be a conjunction of conditions; that is: $$X = \{x : \text{(condition 1), (condition 2), ...., (condition n)}\}$$ means $X$ is the set of all x such that x satisfies (condition 1) AND x satisfies (condition 2) AND ... AND x satisfies (condition n). $$P = \{x: x^2 = 4, x \text{ is odd}\}$$ The only solution to $x^2 = 4$ are $x = -2$ or $x = 2$, neither of which is odd. Hence there are $no$ elements in $P$; that is, $\;P = \varnothing$. $$Q= \{x: x^2 = 9, x \text{ is even}\}$$ The only solutions to $x^2 = 9$ are $x = -3$ or $x = 3$, neither of which is even. Hence, there are no elements in $Q$; that is, $\;Q = \varnothing$. $$R = \{x: x^2 = 9, 2x =4\}$$ $x = 2$ is the only solution to $2x = 4$, but $x = 2$ is not a solution to $x^2 = 9$, (and neither $x = 3$ nor $x = -3$ is a solution to $2x = 4$). Hence, there are no elements in $R$; that is, $\;R = \varnothing$. NOTE: As an aside, regarding notation - sometimes instead of a colon `:`preceding the defining characteristics of a given element, you'll see `|` in place of the colon. E.g., $$P = \{x: x^2 = 4, x \text{ is odd}\}\iff \{x\mid x^2 = 4, x \text{ is odd}\}$$ - 3 Thanks. Very nicely explained. It means that I can also make up my empty sets myself excluding those examples like {x:x=4 and x is a prime} like this? – Fahad Uddin Jun 13 '11 at 14:58 @fahad: Yup...exactly! – amWhy Jun 13 '11 at 15:04 1 Note that these are different ways to write down the empty set. There is only one due to extensionality. 
– starblue Jun 13 '11 at 18:06 @starblue: yes, absolutely you are right, there is one empty set, which can be denoted in many ways... Similarly, there is one set of integers (the set of integers), which can be denoted in many ways... – amWhy Jun 13 '11 at 18:46 I can't say that conditions separated by a comma qualify as a conjunction of conditions. If we have a conjunction of conditions, then such a conjunction qualifies as one condition. But, this doesn't come as necessary here. Where we have n conditions, each condition has to hold simultaneously. For example with {x:x^2=9, x is even} we don't have the single condition "conjunction of "x^2=9" and "x is even"," but rather that each of these conditions can get satisfied at the same time. That said, overall, nice response! – Doug Spoonwood Jun 14 '11 at 14:53 show 1 more comment A Null Set is a set with no elements. While the author of your book uses the notation $\emptyset$, I prefer to use $\{\},$ to emphasize, that the set contains nothing. The example sets $P,\ Q$ and $R$ are all null sets, because there is no $x$, that can satisfy the condition of being included in the set. - All sets have a null set as a subset of the set. so: H={1,2} would have subsets of {1},{2},{1,2}, & {}. The way I look at it is a set with literally nothing in it. So anything, even a 0, would not be a part of a null set. So like amWhy and FUZxxl have said, P, Q & R are null sets because nothing that has been difined in the original example could ever have anything in them, so it would be null. A more laymen example: If you have 5 apples and you define a subset of those apples to include all oranges that you have, it would be null because you have no oranges to put in the subset. (this is how it was explained to us in my discrete math class, lol) -
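Since each of $P$, $Q$ and $R$ is cut out by conditions that no number satisfies simultaneously, a brute-force check over a small range already illustrates the point. Python set comprehensions mirror set-builder notation quite closely; the snippet below is a small illustration added here.

```python
# Set-builder style comprehensions for P, Q, R over a finite search range.
# (Restricting to range(-10, 10) is harmless: any solution of x^2 = 4,
#  x^2 = 9 or 2x = 4 would already lie in that range.)
candidates = range(-10, 10)

P = {x for x in candidates if x**2 == 4 and x % 2 != 0}   # x^2 = 4 and x is odd
Q = {x for x in candidates if x**2 == 9 and x % 2 == 0}   # x^2 = 9 and x is even
R = {x for x in candidates if x**2 == 9 and 2 * x == 4}   # x^2 = 9 and 2x = 4

print(P, Q, R)                 # set() set() set()  (all three are the empty set)
print(P == Q == R == set())    # True
```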
http://mathhelpforum.com/calculus/37926-exponential-function.html
Thread: 1. Exponential function How to complete the following table? (The table was attached as an image in the original post.) And also: a) f(x) = 2x^3 + 3x^2 - 12x + 6 has critical points at x = -2 and x = 1. Use the second derivative test to determine whether these are local maximum or local minimum values. b) How to calculate f(-2) and f(1)? Briefly explain the appropriateness of these two values. 2. Originally Posted by Snowboarder How to complete the following table? And also: a) f(x) = 2x^3 + 3x^2 - 12x + 6 has critical points at x = -2 and x = 1. Use the second derivative test to determine whether these are local maximum or local minimum values. b) How to calculate f(-2) and f(1)? Briefly explain the appropriateness of these two values. $f(x) = 2x^3 + 3x^2 - 12x + 6$ Taking the derivative we get $f'(x)=6x^2+6x-12=6(x^2+x-2)=6(x+2)(x-1)$, so our critical points are $-2$ and $1$. Now taking the 2nd derivative we get $f''(x)=12x+6$. Checking the above critical points: $f''(-2)=12(-2)+6=-18$, so this is a max; $f''(1)=12(1)+6=18$, so this is a min. Good luck.
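Here is a symbolic version of the computation in the reply (an illustration added here, using sympy): it finds the critical points of $f(x) = 2x^3 + 3x^2 - 12x + 6$, applies the second derivative test, and evaluates $f(-2)$ and $f(1)$, which answers part (b) as well.

```python
# Symbolic check of the second-derivative-test computation above (illustrative).
import sympy as sp

x = sp.symbols('x')
f = 2*x**3 + 3*x**2 - 12*x + 6
f1, f2 = sp.diff(f, x), sp.diff(f, x, 2)

critical_points = sp.solve(f1, x)              # [-2, 1]
for c in critical_points:
    kind = "local max" if f2.subs(x, c) < 0 else "local min"
    print(c, f2.subs(x, c), kind, "f(c) =", f.subs(x, c))
# -2  -18  local max  f(c) = 26
#  1   18  local min  f(c) = -1
```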
http://mathhelpforum.com/advanced-algebra/172991-linearly-independent-polynomials-answer-not-checking-out.html
# Thread: 1. ## Linearly independent polynomials; answer not checking out. Are the following vectors in $P_2$ linearly dependent? If yes, express one vector as a linear combination of the rest. $\{3t+1,\; 3t^2+1,\; 2t^2+t+1\}$ This forms the matrix $\left[ \begin{array}{ccc} 0 & 3 & 2 \\ 3 & 0 & 1 \\ 1 & 1 & 1 \end{array} \right]$ which is row equivalent to $\left[ \begin{array}{ccc} 1 & 0 & \frac{1}{3} \\ 0 & 1 & \frac{2}{3} \\ 0 & 0 & 0 \end{array} \right]$ So $c_1=\frac{r}{3}, c_2=\frac{2r}{3}, c_3=-r$ is a solution for the homogeneous system, where $r$ is any real number. Let $r = 1$; then $\frac{1}{3}P_1 + \frac{2}{3}P_2-1=0$ Solving for $P_1=\frac{1}{3}-\frac{2}{3}P_2$ But when I check it doesn't work: $3t+1 \not= \frac{1}{3}-\frac{2}{3}(3t^2+1)$ 2. $P_1=2t^2+t+1,P_2=3t+1,P_3=3t^2+1$. $P_1=\frac{1}{3}P_2+\frac{2}{3}P_3$. 3. $2t^2+t+1=a(3t+1)+b(3t^2+1)\Rightarrow \left\{\begin{matrix} 3b=2 \\ 3a=1 \\ a+b=1 \end{matrix}\right.$ $\Rightarrow a=\frac{1}{3},b=\frac{2}{3}$
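A quick way to double-check this kind of computation is to let a CAS do the row reduction and read a dependency off the null space. The snippet below is an illustration added here, keeping the question's ordering $P_1=3t+1$, $P_2=3t^2+1$, $P_3=2t^2+t+1$: the null-space vector is a multiple of $(\tfrac13,\tfrac23,-1)$, i.e. $\tfrac13 P_1+\tfrac23 P_2-P_3=0$, equivalently $P_3=\tfrac13 P_1+\tfrac23 P_2$, which is what replies 2 and 3 obtain with their relabelled polynomials.

```python
# Check of the row reduction and the resulting dependency (illustrative sketch).
import sympy as sp

t = sp.symbols('t')
P1, P2, P3 = 3*t + 1, 3*t**2 + 1, 2*t**2 + t + 1

# Columns are the coordinate vectors of P1, P2, P3 in the basis {t^2, t, 1}.
A = sp.Matrix([[0, 3, 2],
               [3, 0, 1],
               [1, 1, 1]])
print(A.rref()[0])        # rows: [1, 0, 1/3], [0, 1, 2/3], [0, 0, 0]

c = A.nullspace()[0]      # a multiple of (-1/3, -2/3, 1)
print(c.T)
# The dependency c1*P1 + c2*P2 + c3*P3 = 0, i.e. P3 = (1/3)*P1 + (2/3)*P2:
print(sp.expand(sp.Rational(1, 3)*P1 + sp.Rational(2, 3)*P2 - P3))  # 0
```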
http://math.stackexchange.com/questions/23799/first-order-logic-vs-second-order-logic
# First-Order Logic vs. Second-Order Logic Wikipedia describes the first-order vs. second-order logic as follows: First-order logic uses only variables that range over individuals (elements of the domain of discourse); second-order logic has these variables as well as additional variables that range over sets of individuals. It gives $\forall P\,\forall x (x \in P \lor x \notin P)$ as an SO-logic formula, which makes perfect sense to me. However, in a post at CSTheory, the poster claimed that $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$ is an FO formula. I think this must not be the case, since in the above formula, $x$ and $y$ are sets of individuals, while $z$ is an individual (and therefore this must be an SO formula). I mentioned this as a comment, but two users commented that ZF can be entirely described by FO logic, and therefore $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$ is an FO formula. I'm confused. Could someone explain this please? - Wikipedia said it so it must be true. – Mitch Mar 15 '11 at 15:21 – Kaveh Aug 22 '12 at 23:22 ## 3 Answers It seems to me that you're confusing the first order formulas with their intended interpretations. The language of set theory consists of just a single 2 place predicate symbol, usually denoted $\in$. The statement you quote is a first order statement - it means just what is says: "for all $x$ and for all $y$, ($x = y$ iff for all $z$, ($z \in x$ iff $z \in y$))", but it does not tell you what $x$ is. When you say "but $x$ is a set and $z$ is an individual, so this statement looks second order!", you're adding an interpretation to the picture which is not specified by the first order formula alone - namely that "for all x" means "for all sets x" and "$\in$" means "the usual $\in$ in set theory". In first order logic, this "adding of interpretation" is usually called "exhibiting a model". Here's another way of looking at this same first order statement. Suppose I reinterpret things - I say "for all $x$" means "for all real numbers $x$" and $\in$ means $<$. Then, $\forall x \forall y$ ($x=y$ iff $\forall z$, ($z \in x$ iff $z \in y$)) is a true statement: it says two real numbers are equal iff they have the same collection of smaller things. Notice that in this model, nothing looks second order. By contrast, in second order logic, you are directly referring to subsets, so that in any model (that is, interpretation), $\forall S$ means "for all subsets of whatever set the variable $x$ ranges over". - – Sadeq Dousti Feb 26 '11 at 10:37 3 Let us consider some domain interpretation for your formula (for example, sets, human beings, ...). Then, the first-order variables (like x,y,z, etc.) must be interpreted as members of your domain (for example, x is a set, x is a human being, etc), while the second-order variables (like P, Q, etc) must be interpreted as subsets of your domain (for example, P is a family of sets, P is a family of human beings, etc). This is the difference. – boumol Feb 26 '11 at 13:00 – leo Apr 16 at 8:03 ## Did you find this question interesting? Try our newsletter email address The statement is indeed a first order statement in standard Set Theory. But it is no wonder you are a bit confused. Most people think of sets and elements as different things. However, in standard set theory this is not the case. In standard set theory, everything is a set (there are no "ur-elements", elements that are not sets). The objects of set theory are sets themselves. 
The primitive relation $\in$ is a relation between sets, not between ur-elements and sets. So, for example, the Axiom of the Power Set in ZFC states that $$\forall x\exists y(\forall z(z\in y \leftrightarrow z\subseteq x))$$ which is a first order statement, because all the things being quantified over are objects in the theory (namely, "sets"). So you need to forget the notion that "elements" are things in sets and sets are things that contain elements. In ZF, everything is a set. It's a little hard to see the distinction between first and second order statements in ZF precisely because "sets" are the objects, and moreover, given any set, there is a set that contains all the subsets (the power set). In a way, the Axioms are set up precisely to allow you to talk about collections of sets without having to go to second order logic. In ZF, to get to second order you need to start talking about "proper classes" or "properties". For instance, that's why Comprehension is not a single axiom, but an entire infinite family of axioms. Comprehension essentially says that for every property $P$ and every object $x$ of the theory (i.e., every set $x$), $\{y\mid y\in x\wedge P(y)\}$ is a set. But trying to quantify over all propositions would be a second order statement. Instead, you have an "Axiom Schema" which says that for each property $P$, you have an axiom that says $$\forall x\exists y\Bigl(z\in y\leftrightarrow\bigl(z\in x\wedge P(z)\bigr)\Bigr).$$ If you try quantifying over "all $P$", then you get a second order statement in ZF. - I think that the last paragraph is not correct, for example in set theory MK, we have two sorts, one for classes and one for proper sets, but the language is still first order. – Kaveh Feb 26 '11 at 5:15 @Kaveh: Well, it's poorly phrased. I meant to put it entirely in the context of ZF, but did not do so correctly. Thanks – Arturo Magidin Feb 26 '11 at 19:29 I don't know any second order set theory and the concept seems a little bit strange for me. Higher order theories are higher order because they restrict the possible interpretation of some sorts, e.g. the sort of subsets of the first sort needs to be really the subsets of the first sort, i.e. they enforce (semantically not syntacticly by axioms) some amount of set theory on the higher sorts in the interpretations, this makes sense for example for higher order arithmetics, but (although possible) it seems strange for a set theory to have some set theoretic restrictions on interpretations. – Kaveh Feb 27 '11 at 10:20 ps: In MK you can quantify over all properties (even those not expressible as first order formulas) by quantifying over the second sort which is classes=properties, and it is still first order. As I said in the the comment above a second or higher order logic enforces some semantical restriction on the possible interpretations of the higher sorts. – Kaveh Feb 27 '11 at 10:22 I'm sorry but I don't know or see how the question is really answered: Is there a way to read off of any given statement if it's first-order logic just by looking at the symbols, or does one have to say "Oh, and P is supposed to be a property here, and x is not." Because at first I though by looking at $\forall P\,\forall x (x \in P \lor x \notin P)$ that the sequence "$x \in P$" and the fact that never "$P\in ...$" would tell you that, but you wouldn't know from a single sentence that therefore $P$ has to be the second sort. 
– Nick Kidman Jul 12 '12 at 21:54 show 6 more comments Replace the non-logical relational symbol ∈ with R and the confusion goes away. Your problem arises from the fact that the symbol ∈ makes you confuse the neutral symbol of the language of set theory (which has no properties other than those expressed by axioms of the theory) with the intended interpretation of the symbol about real sets. The set theories like ZF have very strange models which has nothing to do with the intended model and interpretation of the relation $R$ has nothing to do with the real membership relation between sets. A first-order theory can have many sorts and the intended meaning of some of those sorts can be higher type objects, e.g. we can have a two sorted theory where the intended meaning of the objects of the first sort are natural numbers and the intended meaning of the objects of the second sort are sets of natural numbers. We can quantify over them and the theory is still first order, though it has two sorts. The models of this theory needs to have two sets, one for interpreting the objects of the first sort and the other for interpreting the objects of the second sort, but there does not need to be any relation between these two sets that are used for interpreting objects even if the intended interpretation for the second sort of sets of objects from the first sort. The only properties of that these two sets need to satisfy are those expressed by axioms of the theory. You can add axioms of a set theory like ZFC for the second sort and consider objects of the first sort as urelements of the set theory that satisfying some other axioms e.g. first order Peano Arithmetic. This is still a first order theory. And the theory will have models where the interpretations of the two sort are not related in the intended way, i.e. the objects of the second sort will not be the sets of objects of the first sort, and there are no way to enforce this syntactically, i.e. using axioms. A second- or higher-order theory enforces some restriction on possible interpretations of some sort which are called the higher sort. These restrictions are not syntactical, i.e. they are not axioms of theory, but semantical, i.e. we restrict the models of the theory to those that satisfy some certain conditions, e.g. the members of the set interpreting the second sort in the two sorted number theory I mentioned above are really the sets of objects of the first sort. This is like assuming some amount of set theory semantically. Without these semantical restrictions about interpretation the theory is still first order. The higher-order theories are interesting for studying things like natural numbers. The (complete/full) second-order Peano arithmetic will force the set interpreting the second sort to be exactly the powerset of the first sort. Note that this theory is not axiomatizable, i.e. there are no set of first order axioms that will capture exactly these models. (In fact even weaker version of this second-order set theory which do not require the existence of all subsets of the first sort are not axiomatizable.) On the other hand, I don't know any higher order set theory, and in fact the concept itself seems a little bit unnatural (of course it can be due to my lack of knowledge). - 1 – Sadeq Dousti Feb 27 '11 at 12:00 2 I always find SEP articles nice. The answer depends on what you want to learn, do you want to learn about higher order arithmetics or philosophy of higher order logics or descriptive complexity or ...? 
Enderton 2001 (textbook) and Shapiro 1991 (philosophy) are nice general readings. Simpson 1999 is the bible for second-order arithmetic and its subsystems; it might be too heavy if you are not going to do research in that area (but you might want to read the intro part). I haven't read Väänänen 2001 but it should be a good one, as should Boolos 1975. Other references seem to be either old or technical papers. – Kaveh Feb 27 '11 at 12:13 Thanks a lot! – Sadeq Dousti Feb 27 '11 at 22:12 @Kaveh: +1 for your excellent answer! – user17090 Sep 29 '12 at 11:12
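One way to see the point about "adding an interpretation means exhibiting a model", including the first answer's reinterpretation of $\in$ as $<$ on the reals, is to evaluate the sentence $\forall x\forall y\,(x=y\leftrightarrow\forall z\,(z\in x\leftrightarrow z\in y))$ in a small finite structure where the binary relation symbol is interpreted as $<$. The snippet below is an illustration added here, not from the thread.

```python
# Evaluate the first-order sentence
#   forall x forall y ( x = y  <->  forall z ( R(z, x) <-> R(z, y) ) )
# in a finite structure, interpreting the binary relation symbol R as "<"
# on the domain {0, 1, 2, 3}.  (Illustration only.)
domain = range(4)
R = lambda a, b: a < b          # the chosen interpretation of the relation symbol

def sentence_holds():
    return all(
        (x == y) == all(R(z, x) == R(z, y) for z in domain)
        for x in domain for y in domain
    )

print(sentence_holds())         # True: two elements with the same predecessors are equal
```

Nothing in the check mentions sets or membership: the sentence is a first-order statement about whatever domain and relation we happen to supply, which is the point the answers are making.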
http://physics.stackexchange.com/questions/54912/representing-forces-as-one-forms?answertab=oldest
# Representing forces as one-forms First of all, sorry if any of those things are silly or nonsense, I'm just trying to understand better how the concepts of forms, exterior derivative and so on can be used in physics. This question arose because of my first question Interpreting Vector fields as Derivations on Physics. Well the point here is: if some force $F$ is conservative, then there's some scalar field $U$ which is the potential so that we can write $F = - \nabla U$. That's fine, it says that force is a covector, but the point is: when we start thinking about curved spaces, in general instead of talking about gradients and covectors we talk about exterior derivatives and one forms. My question then is: if a force is conservative with potential $U$ then it's correct do represent the force by the one-form obtained by the exterior derivative of the potential, in other words the form $F = -dU$ ? In second place, if the force isn't conservative, is it correct to think of it as a one form yet ? But now what's the interpretation ? I tried to give this interpretation: suppose we're dealing with some manifold $M$ and suppose that $(W,x)$ is a coordinate chart. Then $\left\{dx^i \right\}$ spans the cotangent space, and so, if we interpret some force at the point $p$ as some one form $F \in T^\ast_pM$ then we'll have $F=F_idx^i$ using the summation convention. Now if i take some vector $v \in T_pM$ we can compute $F(v) = F_idx^i(v)$, however, $dx^i(v)=v^i$ and hence $F(v)=F_iv^i$ and so my conclusion is: if I interpret force at a point as a one-form at the point, then it'll be a form that when given a vector, gives the work done moving a particle in the direction of the given vector. So if a force varies from point to point, I could represent it as a one-form field that can be integrated along some path to find the total work done. Can someone answers those points and tell me if my conclusion is correct? And again, sorry if anything here is silly, I just really don't know. - – Qmechanic♦ Apr 7 at 21:24 ## 3 Answers To understand what a Newtonian force field is, let's take a look at Newton's second law $$F = ma$$ This translates to the following differential-geometric relation $$(m\circ\dot q)^\cdot = F\circ\dot q$$ where $m:\mathrm{T}M\to \mathrm{T}^*M$ maps from velocity to momentum space and $q:I\subset\mathbb R\to M$ is the trajectory. The force field ends up being a map $$F:\mathrm{T}M\to \mathrm{T}\mathrm{T}^*M$$ Let $$\pi:\mathrm{T}^*M\to M \\ \Pi:\mathrm{T}\mathrm{T}^*M\to \mathrm{T}^*M$$ be the bundle projections. Then $$m = \Pi\circ F \\ \mathrm{T}\pi\circ F = \mathrm{id}_{\mathrm{T}M}$$ The latter equation is the equivalent of the semi-spray condition and tells us that we're dealing with a second-order field. Because the bundles $\mathrm{T}\mathrm{T}^*M$ and $\mathrm{T}^*\mathrm{T}M$ are naturally isomorphic - in coordinates, we just switch the components $(x,p;v,f)\mapsto(x,v;f,p)$ - we can represent it as a differential form on $\mathrm{T}M$, which is just the differential $\mathrm dL$ of the Lagrange function (the Euler-Lagrange equation are Newtonian equations of motion). Now, the space of Newtonian force fields doesn't come with a natural vector space structure, but rather an affine structure. You need to specify a zero force - a force of inertia - to make it into one. Such a force can for example be given by the geodesic spray of general relativity. 
Once that's done, you can represent the force field as a section of the pullback bundle $\tau^*(\mathrm{T}^*M)$ where $\tau:\mathrm{T}M\to M$. This is a velocity-dependant covector field, which you can indeed integrate over or derive from a potential function (in case of velocity independence). Now, for those who are uncomfortable with this level of abstraction, let's try a more hands-on approach: Geometrically, the acceleration is given by $(x,v;v,a)\in\mathrm{TT}M$. However, that space has the wrong structure - if we add two accelerations acting on the same particle, we end up with $(x,v;2v,a+a')$, which is no longer a valid acceleration. What we want instead are vectors $(x;a)\in\mathrm{T}M$ or $(x,v;a)\in\tau^*(\mathrm{T}M)$ in case of velocity-dependent accelerations, and a recipe how to get from these to our original acceleration as that's what occurs in our equation of motion. So let's assume our acceleration is velocity-independent and represented by $(x;a)\in\mathrm{T}M$. By lifting the vector vertically at $(x;v)\in\mathrm{T}M$, we arrive at $(x,v;0,a)\in\mathrm{TT}M$. What's 'missing' is the horizontal component $(x,v;v,0)\in\mathrm{TT}M$. Even though such a horizontal lift looks trivial in coordinates, it is not a 'natural' operation in differential geometry. You can fix this in two obvious ways by either providing a connection (it's trivial to see how this works out if you take the geometric approach due to Ehresmann) or by manually specifying the 'zero' acceleration due to inertia. The question that's left to answer is why we're using forces instead of accelerations, or formulated another way, why do we move to the cotangent space? From the point of view of differential geometry, one answer to that question is because we want to work with potentials, which are less complicated objects, and the differential yields covectors instead of vectors. Another point of consideration is that $\mathrm{TT^*}M$, $\mathrm{T^*T}M$ and $\mathrm{T^*T^*}M$ are naturally isomorphic, whereas $\mathrm{TT}M$ is not. These isomorphisms lead to several (more or less) equivalent formulations of analytical mechanics, including the Newtonian, Lagrangian and Hamiltonian approach. Apologies for expanding the scope of the question - feel free to ignore these ramblings ;) - Wow, very in-depth. Didn't really consider this generality. Are there any applications of this? – alexarvanitakis Feb 24 at 14:20 @Christoph, very good answer indeed, it's the kind of approach to physics that I like. Can you recommend me some book that covers more of those topics ? I mean, studying physics using rigorous math ? Thanks for your answer again. – user1620696 Feb 24 at 16:02 I like the approach you are taking but things really seem strange to me. Shouldn't the force naturally be a covector field $F:M\rightarrow T^*M$ and Newton's 2nd Law is $g(F,\cdot)=ma$ or something? Is is not natural to find a spray in this manner? 
– levitopher Feb 25 at 1:51 @alexarvanitakis: I'm not aware of applications; the 'interesting stuff' is normally done using the Hamiltonian or Lagrangian approach, which come with well-developed generalizations – Christoph Feb 25 at 7:36 1 @cduston: let's forget about dual spaces for now; the second-order equation $\ddot q = Y\circ\dot q$ tells us that the acceleration field $Y$ needs to be a semi-spray; the problem is that there is no natural way to get such a second-order vector field from a first-order vector field without additional structure (eg a connection) – Christoph Feb 25 at 7:44 show 4 more comments There was a rather lengthy discussion about whether force is naturally a vector or a covector over at physicsforums: http://www.physicsforums.com/showthread.php?t=666861 . If you define momentum as "that which is conjugate to position," then momentum is a covector. I.e. if you have a Lagrangian, then: $$p_\mu =\frac{\partial L}{\partial \dot{x}^\mu}$$ Force can then be interpreted as $d p_\mu / d \tau$. Or, you can define force directly from the Lagrangian as: $$F_\mu=\frac{\partial L}{\partial x^{\mu}}$$ Combined with the argument about work that you provided, where $W=\int F_{i} dx^{i}$, it seems very compelling that force should naturally be interpreted as a covector. - Everything you said is correct. If a force is not conservative then it still makes sense as an 1-form, albeit one that is not exact. Note also that the condition $\vec{\nabla} \times \vec{F} =0$ for a force to be locally determined by a potential can be written as $d F=0$ so that $F=-d U$ for some function $U$ by the Poincare lemma. More generally we have p-form potentials $A$ to which we associate p+1-form field strengths $dF$. E.g in electromagnetism (again!) we can combine the vector and scalar potentials into a 1-form on spacetime (3+1=4) and the resulting field strength tensor is this one. -
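The reading at the end of the question, that a force one-form eats a tangent vector and returns work per unit displacement and integrates along a path to give total work, is easy to see numerically. The sketch below is an illustration added here (the force fields and paths are invented): the exact one-form $-\mathrm dU$ with $U=x^2+y^2$ gives the same work along two different paths from $(1,0)$ to $(0,1)$, while the non-exact one-form $-y\,\mathrm dx+x\,\mathrm dy$ does not.

```python
# Numerical illustration: integrating a force one-form F = F1 dx + F2 dy along a
# parametrized path gives the work; an exact form -dU is path-independent,
# a non-exact one is not.
import numpy as np

def work(F, path, n=20000):
    """Approximate the line integral of F1 dx + F2 dy along the parametrized path."""
    t = np.linspace(0.0, 1.0, n)
    x, y = path(t)
    F1, F2 = F(x, y)
    # trapezoid-style sum of F1 dx + F2 dy
    return np.sum(0.5 * (F1[1:] + F1[:-1]) * np.diff(x)
                + 0.5 * (F2[1:] + F2[:-1]) * np.diff(y))

quarter_circle = lambda t: (np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t))
straight_line  = lambda t: (1.0 - t, t)

conservative = lambda x, y: (-2 * x, -2 * y)      # F = -dU with U = x^2 + y^2
non_exact    = lambda x, y: (-y, x)               # F = -y dx + x dy, dF != 0

print(work(conservative, quarter_circle), work(conservative, straight_line))  # ~0, ~0
print(work(non_exact, quarter_circle), work(non_exact, straight_line))        # ~pi/2, ~1
```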
http://johncarlosbaez.wordpress.com/2012/08/21/network-theory-part-23/
# Azimuth ## Network Theory (Part 23) We’ve been looking at reaction networks, and we’re getting ready to find equilibrium solutions of the equations they give. To do this, we’ll need to connect them to another kind of network we’ve studied. A reaction network is something like this: It’s a bunch of complexes, which are sums of basic building-blocks called species, together with arrows called transitions going between the complexes. If we know a number for each transition describing the rate at which it occurs, we get an equation called the ‘rate equation’. This describes how the amount of each species changes with time. We’ve been talking about this equation ever since the start of this series! Last time, we wrote it down in a new very compact form: $\displaystyle{ \frac{d x}{d t} = Y H x^Y }$ Here $x$ is a vector whose components are the amounts of each species, while $H$ and $Y$ are certain matrices. But now suppose we forget how each complex is made of species! Suppose we just think of them as abstract things in their own right, like numbered boxes: We can use these boxes to describe states of some system. The arrows still describe transitions, but now we think of these as ways for the system to hop from one state to another. Say we know a number for each transition describing the probability per time at which it occurs: Then we get a ‘Markov process’—or in other words, a random walk where our system hops from one state to another. If $\psi$ is the probability distribution saying how likely the system is to be in each state, this Markov process is described by this equation: $\displaystyle{ \frac{d \psi}{d t} = H \psi }$ This is simpler than the rate equation, because it’s linear. But the matrix $H$ is the same—we’ll see that explicitly later on today. What’s the point? Well, our ultimate goal is to prove the deficiency zero theorem, which gives equilibrium solutions of the rate equation. That means finding $x$ with $Y H x^Y = 0$ Today we’ll find all equilibria for the Markov process, meaning all $\psi$ with $H \psi = 0$ Then next time we’ll show some of these have the form $\psi = x^Y$ So, we’ll get $H x^Y = 0$ and thus $Y H x^Y = 0$ as desired! So, let’s get to to work. ### The Markov process of a graph with rates We’ve been looking at stochastic reaction networks, which are things like this: However, we can build a Markov process starting from just part of this information: Let’s call this thing a ‘graph with rates’, for lack of a better name. We’ve been calling the things in $K$ ‘complexes’, but now we’ll think of them as ‘states’. So: Definition. A graph with rates consists of: • a finite set of states $K,$ • a finite set of transitions $T,$ • a map $r: T \to (0,\infty)$ giving a rate constant for each transition, • source and target maps $s,t : T \to K$ saying where each transition starts and ends. Starting from this, we can get a Markov process describing how a probability distribution $\psi$ on our set of states will change with time. As usual, this Markov process is described by a master equation: $\displaystyle{ \frac{d \psi}{d t} = H \psi }$ for some Hamiltonian: $H : \mathbb{R}^K \to \mathbb{R}^K$ What is this Hamiltonian, exactly? Let’s think of it as a matrix where $H_{i j}$ is the probability per time for our system to hop from the state $j$ to the state $i.$ This looks backwards, but don’t blame me—blame the guys who invented the usual conventions for matrix algebra. 
Clearly if $i \ne j$ this probability per time should be the sum of the rate constants of all transitions going from $j$ to $i$: $\displaystyle{ i \ne j \quad \Rightarrow \quad H_{i j} = \sum_{\tau: j \to i} r(\tau) }$ where we write $\tau: j \to i$ when $\tau$ is a transition with source $j$ and target $i.$ Now, we saw in Part 11 that for a probability distribution to remain a probability distribution as it evolves in time according to the master equation, we need $H$ to be infinitesimal stochastic: its off-diagonal entries must be nonnegative, and the sum of the entries in each column must be zero. The first condition holds already, and the second one tells us what the diagonal entries must be. So, we’re basically done describing $H.$ But we can summarize it this way: Puzzle 1. Think of $\mathbb{R}^K$ as the vector space consisting of finite linear combinations of elements $\kappa \in K.$ Then show $\displaystyle{ H \kappa = \sum_{s(\tau) = \kappa} r(\tau) (t(\tau) - s(\tau)) }$ ### Equilibrium solutions of the master equation Now we’ll classify equilibrium solutions of the master equation, meaning $\psi \in \mathbb{R}^K$ with $H \psi = 0$ We’ll only do this when our graph with rates is ‘weakly reversible’. This concept doesn’t actually depend on the rates, so let’s be general and say: Definition. A graph is weakly reversible if for every edge $\tau : i \to j,$ there is a directed path going back from $j$ to $i,$ meaning that we have edges $\tau_1 : j \to j_1 , \quad \tau_2 : j_1 \to j_2 , \quad \dots, \quad \tau_n: j_{n-1} \to i$ This graph with rates is not weakly reversible: but this one is: The good thing about the weakly reversible case is that we get one equilibrium solution of the master equation for each component of our graph, and all equilibrium solutions are linear combinations of these. This is not true in general! For example, this guy is not weakly reversible: It has only one component, but the master equation has two linearly independent equilibrium solutions: one that vanishes except at the state 0, and one that vanishes except at the state 2. The idea of a ‘component’ is supposed to be fairly intuitive—our graph falls apart into pieces called components—but we should make it precise. As explained in Part 21, the graphs we’re using here are directed multigraphs, meaning things like $s, t : E \to V$ where $E$ is the set of edges (our transitions) and $V$ is the set of vertices (our states). There are actually two famous concepts of ‘component’ for graphs of this sort: ‘strongly connected’ components and ‘connected’ components. We only need connected components, but let me explain both concepts, in a futile attempt to slake your insatiable thirst for knowledge. Two vertices $i$ and $j$ of a graph lie in the same strongly connected component iff you can find a directed path of edges from $i$ to $j$ and also one from $j$ back to $i.$ Remember, a directed path from $i$ to $j$ looks like this: $i \to a \to b \to c \to j$ Here’s a path from $i$ to $j$ that is not directed: $i \to a \leftarrow b \to c \to j$ and I hope you can write down the obvious but tedious definition of an ‘undirected path’, meaning a path made of edges that don’t necessarily point in the correct direction.
Given that, we say two vertices $i$ and $j$ lie in the same connected component iff you can find an undirected path going from $i$ to $j.$ In this case, there will automatically also be an undirected path going from $j$ to $i.$ For example, $i$ and $j$ lie in the same connected component here, but not the same strongly connected component: $i \to a \leftarrow b \to c \to j$ Here’s a graph with one connected component and 3 strongly connected components, which are marked in blue: For the theory we’re looking at now, we only care about connected components, not strongly connected components! However: Puzzle 2. Show that for a weakly reversible graph, the connected components are the same as the strongly connected components. With these definitions out of the way, we can state today’s big theorem: Theorem. Suppose $H$ is the Hamiltonian of a weakly reversible graph with rates: Then for each connected component $C \subseteq K,$ there exists a unique probability distribution $\psi_C \in \mathbb{R}^K$ that is positive on that component, zero elsewhere, and is an equilibrium solution of the master equation: $H \psi_C = 0$ Moreover, these probability distributions $\psi_C$ form a basis for the space of equilibrium solutions of the master equation. So, the dimension of this space is the number of components of $K.$ Proof. We start by assuming our graph has one connected component. We use the Perron–Frobenius theorem, as explained in Part 20. This applies to ‘nonnegative’ matrices, meaning those whose entries are all nonnegative. That is not true of $H$ itself, but only its diagonal entries can be negative, so if we choose a large enough number $c > 0,$ $H + c I$ will be nonnegative. Since our graph is weakly reversible and has one connected component, it follows straight from the definitions that the operator $H + c I$ will also be ‘irreducible’ in the sense of Part 20. The Perron–Frobenius theorem then swings into action, and we instantly conclude several things. First, $H + c I$ has a positive real eigenvalue $r$ such that any other eigenvalue, possibly complex, has absolute value $\le r.$ Second, there is an eigenvector $\psi$ with eigenvalue $r$ and all positive components. Third, any other eigenvector with eigenvalue $r$ is a scalar multiple of $\psi.$ Subtracting $c,$ it follows that $\lambda = r - c$ is the eigenvalue of $H$ with the largest real part. We have $H \psi = \lambda \psi,$ and any other vector with this property is a scalar multiple of $\psi.$ We can show that in fact $\lambda = 0.$ To do this we copy an argument from Part 20. First, since $\psi$ is positive we can normalize it to be a probability distribution: $\displaystyle{ \sum_{i \in K} \psi_i = 1 }$ Since $H$ is infinitesimal stochastic, $\exp(t H)$ sends probability distributions to probability distributions: $\displaystyle{ \sum_{i \in K} (\exp(t H) \psi)_i = 1 }$ for all $t \ge 0.$ On the other hand, $\displaystyle{ \sum_{i \in K} (\exp(t H)\psi)_i = \sum_{i \in K} e^{t \lambda} \psi_i = e^{t \lambda} }$ so we must have $\lambda = 0.$ We conclude that when our graph has one connected component, there is a probability distribution $\psi \in \mathbb{R}^K$ that is positive everywhere and has $H \psi = 0.$ Moreover, any $\phi \in \mathbb{R}^K$ with $H \phi = 0$ is a scalar multiple of $\psi.$ When $K$ has several components, the matrix $H$ is block diagonal, with one block for each component.
So, we can run the above argument on each component $C \subseteq K$ and get a probability distribution $\psi_C \in \mathbb{R}^K$ that is positive on $C.$ We can then check that $H \psi_C = 0$ and that every $\phi \in \mathbb{R}^K$ with $H \phi = 0$ can be expressed as a linear combination of these probability distributions $\psi_C$ in a unique way.   █ This result must be absurdly familiar to people who study Markov processes, but I haven’t bothered to look up a reference yet. Do you happen to know a good one? I’d like to see one that generalizes this theorem to graphs that aren’t weakly reversible. I think I see how it goes. We don’t need that generalization right now, but it would be good to have around. ### The Hamiltonian, revisited One last small piece of business: last time I showed you a very slick formula for the Hamiltonian $H.$ I’d like to prove it agrees with the formula I gave this time. We extend $s$ and $t$ to linear maps between vector spaces: We define the boundary operator just as we did last time: $\partial = t - s$ Then we put an inner product on the vector spaces $\mathbb{R}^T$ and $\mathbb{R}^K.$ So, for $\mathbb{R}^K$ we let the elements of $K$ be an orthonormal basis, but for $\mathbb{R}^T$ we define the inner product in a more clever way involving the rate constants: $\displaystyle{ \langle \tau, \tau' \rangle = \frac{1}{r(\tau)} \delta_{\tau, \tau'} }$ where $\tau, \tau' \in T.$ This lets us define adjoints of the maps $s, t$ and $\partial,$ via formulas like this: $\langle s^\dagger \phi, \psi \rangle = \langle \phi, s \psi \rangle$ Then: Theorem. The Hamiltonian for a graph with rates is given by $H = \partial s^\dagger$ Proof. It suffices to check that this formula agrees with the formula for $H$ given in Puzzle 1: $\displaystyle{ H \kappa = \sum_{s(\tau) = \kappa} r(\tau) (t(\tau) - s(\tau)) }$ Here we are using the complex $\kappa \in K$ as a name for one of the standard basis vectors of $\mathbb{R}^K.$ Similarly shall we write things like $\tau$ or $\tau'$ for basis vectors of $\mathbb{R}^T.$ First, we claim that $\displaystyle{ s^\dagger \kappa = \sum_{\tau: \; s(\tau) = \kappa} r(\tau) \, \tau }$ To prove this it’s enough to check that taking the inner products of either sides with any basis vector $\tau',$ we get results that agree. On the one hand: $\begin{array}{ccl} \langle \tau' , s^\dagger \kappa \rangle &=& \langle s \tau', \kappa \rangle \\ \\ &=& \delta_{s(\tau'), \kappa} \end{array}$ On the other hand: $\begin{array}{ccl} \displaystyle{ \langle \tau', \sum_{\tau: \; s(\tau) = \kappa} r(\tau) \, \tau \rangle } &=& \sum_{\tau: \; s(\tau) = \kappa} r(\tau) \, \langle \tau', \tau \rangle \\ \\ &=& \displaystyle{ \sum_{\tau: \; s(\tau) = \kappa} \delta_{\tau', \tau} } \\ \\ &=& \delta_{s(\tau'), \kappa} \end{array}$ where the factor of $1/r(\tau)$ in the inner product on $\mathbb{R}^T$ cancels the visible factor of $r(\tau).$ So indeed the results match. Using this formula for $s^\dagger \kappa$ we now see that $\begin{array}{ccl} H \kappa &=& \partial s^\dagger \kappa \\ \\ &=& \partial \displaystyle{ \sum_{\tau: \; s(\tau) = \kappa} r(\tau) \, \tau } \\ \\ &=& \displaystyle{ \sum_{\tau: \; s(\tau) = \kappa} r(\tau) \, (t(\tau) - s(\tau)) } \end{array}$ which is precisely what we want.   █ I hope you see through the formulas to their intuitive meaning. As usual, the formulas are just a way of precisely saying something that makes plenty of sense. 
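Readers who like to see such identities concretely can check them numerically. Here is a minimal sketch, assuming Python with NumPy and a made-up graph with three states and four transitions (nothing here is taken from the post's own examples): it builds $s$ and $t$ as matrices, uses the fact that with the rate-weighted inner product the adjoint of $s$ is $\mathrm{diag}(r)\, s^T$, and checks that $\partial s^\dagger$ agrees with the sum-over-transitions formula from Puzzle 1 and has columns summing to zero:

```python
import numpy as np

# A made-up graph with rates: 3 states and 4 transitions (source, target, rate).
transitions = [(0, 1, 2.0), (1, 0, 1.0), (1, 2, 3.0), (2, 1, 0.5)]
K, T = 3, len(transitions)

# s, t : R^T -> R^K as matrices; column tau is the basis vector of its source/target.
s = np.zeros((K, T))
t = np.zeros((K, T))
r = np.array([rate for _, _, rate in transitions])
for j, (src, tgt, _) in enumerate(transitions):
    s[src, j] = 1.0
    t[tgt, j] = 1.0

# With <tau, tau'> = delta / r(tau) on R^T and the standard inner product on R^K,
# the adjoint of s is s^dagger = diag(r) s^T, so H = (t - s) diag(r) s^T.
H = (t - s) @ (np.diag(r) @ s.T)

# The same matrix, built column by column from the Puzzle 1 formula.
H_puzzle = np.zeros((K, K))
for src, tgt, rate in transitions:
    H_puzzle[tgt, src] += rate   # r(tau) * t(tau)
    H_puzzle[src, src] -= rate   # -r(tau) * s(tau)

assert np.allclose(H, H_puzzle)
assert np.allclose(H.sum(axis=0), 0.0)   # infinitesimal stochastic: columns sum to zero
```

Swapping in any other graph with rates only means changing the `transitions` list.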
If $\kappa$ is some state of our Markov process, $s^\dagger \kappa$ is the sum of all transitions starting at this state, weighted by their rates. Applying $\partial$ to a transition tells us what change in state it causes. So $\partial s^\dagger \kappa$ tells us the rate at which things change when we start in the state $\kappa.$ That’s why $\partial s^\dagger$ is the Hamiltonian for our Markov process. After all, the Hamiltonian tells us how things change: $\displaystyle{ \frac{d \psi}{d t} = H \psi }$ Okay, we’ve got all the machinery in place. Next time we’ll prove the deficiency zero theorem! This entry was posted on Tuesday, August 21st, 2012 at 7:03 am and is filed under chemistry, mathematics, networks, probability. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. ### 9 Responses to Network Theory (Part 23) 1. Arrow says: I think $H_{ij}$ explanation has a typo since “the probability per time for our system to hop from the i state to the state j” doesn’t look backwards. • John Baez says: Yes, I guess my subconscious just couldn’t stomach the truth: $H_{i j}$ is really the probability per time for our system to hop from the state $j$ to the state $i.$ Thanks—fixed! 2. Blake Pollard says: For the puzzle: For any two vertices $i$ and $j$ in a connected component of a weakly reversible graph we have an undirected path going from $i$ to $j$, a set of edges or transitions with sources and targets. $\{ \tau_{j1} \tau_{jn} \}$. Starting with the edge attached to $j$, $\tau_{j1}$, if $t(\tau_{j1}) = j$ then move to $s(\tau_{j1})$, if not then $s(\tau_{j1}) = j$. Since the graph is weakly reversible there exists a directed path from $t(\tau_{j1})$ to $s(\tau_{j1}) = j$, call this directed path $\tau'_{j1}$ (could be a composition of several transitions). Again move on to $s(\tau'_{j1})$. Repeat until $s(\tau_jk)=i$ and then the set of primed and unprimed transitions $\tau'_{j1} ... \tau_{jk}$ is a directed path from $i$ to $j$ and since the graph is weakly reversible we have a directed path path back from $j$ to $i$ for every pair of vertices in the connected component, hence it is strongly connected. What you really run into is exactly what you see in your example of an undirected path, namely vertices in the undirected path that are only the source or only the target for two different transitions, using weak reversibility on the one of these that goes with your ordering you end up with an ordered path. It’s interesting you can either choose to go along and use weak reversibility at each vertex to initially create a directed path back from $j$ to $i$ or you can do what I did and use weak reversibility to direct your undirected path and then use weak reversibility again to show that component is strongly connected. Of course you need both for strongly connected. • Blake Pollard says: sorry first time latex on here… and you missed a latex right at the end of the post. • John Baez says: Thanks for catching that—it’s those last-minute afterthoughts that get me, every time. By the way, stuff like \usepackage, or macros don’t work here. You get what you get and that’s all you get. According to WordPress the blog comes pre-equipped with amsmath, amsfonts and amssymb. But they don’t list all the stuff that doesn’t work. All sorts of fancy formatting commands don’t work. So don’t push your luck. • John Baez says: If I understand your answer correctly, Blake, it sounds right. Let me say it my way. 
We’ve got a connected component of a weakly reversible graph, and we’re trying to show it’s strongly connected. So, given two vertices v and w, we know there’s an undirected path of edges from v to w, and we’re trying to show there’s a directed one. Look at each edge in this path—say the edge between some vertex x and the next vertex on the path, y. Either it’s pointing the right way—from x to y—or it’s not. If it’s pointing the right way, don’t mess with it. If it’s pointing the wrong way—from y to x—we know by weak reversibility that there’s a directed path going back from x to y. So, replace this edge by that directed path. We thus get a directed path from v to w, built from the edges in the original path that were pointing in the right direction, and the directed paths we used to replace the edges that were pointing in the wrong direction. (Look, ma—no subscripts!) By the way, maybe it’s time to announce that Blake Pollard is now starting grad school at U.C. Riverside! When are you actually going there, Blake? • Blake says: I’m still working out here in Hawaii for about a month, staying island-side right up until the last minute! So I’ll be getting to Riverside probably the 21st of next month, just before classes start. Looking forward to it. As for the puzzle I like your explanation better, I was trying to warm up my confusing math speak since it’s a bit rusty. The switching of certain edges with directed paths reminds me of time-ordering operators in QFT, except there you just switch the order of terms rather than replacing them with possibly more terms. • John Baez says: I’m showing up in Riverside on September 21st as well! See you in my ‘Math of Climate Science’ seminar. 3. John Baez says: Let me record here my guess about equilibria for general Markov processes on finite sets, where we drop the ‘weak reversibility’ assumption. Someone must know this already, and I’d love to see a reference. A general directed multigraph looks a bit like this: Actually this is just a directed graph: it doesn’t have multiple edges going from one vertex to another. It also doesn’t have edges from a vertex to itself. But as I’ve explained, these features are irrelevant for the Markov process! When studying Markov processes, it’s enough to consider directed graphs with positive numbers labelling their edges. The strongly connected components are shaded in blue. If we collapse each strongly connected component to a point, and combine all edges from one component to another into a single edge, we get a directed acyclic graph, meaning a directed graph without any directed cycles. What can the equilibria of the Markov process look like? In the terminology of this post, an equilibrium is a probability distribution $\psi$ with $H \psi = 0$ A directed acyclic graph gives a partial order on the set of vertices, where v ≤ w exactly when there exists a directed path from v to w. Let’s say a vertex v is maximal if there’s no vertex w with v < w. So, there's a partial order on the strongly connected components of a directed graph, and we can talk about a 'maximal' strongly connected component. In this picture: the strongly connected component containing f and g is maximal. If you imagine what happens with a probability distribution as it evolves in time according to a Markov process with the graph, you’ll see that probability will flow into this component, but never leave it. In general there could be a bunch of maximal strongly connected components. Probability will flow into these, but never leave.
For this reason, I think it’s obvious that an equilibrium $\psi$ can only be nonzero on the strongly connected components that are maximal. So what are these like? We can write the equilibrium $\psi$ as a sum of pieces, each supported on a different maximal strongly connected component. (By ‘supported on’ I mean that it’s zero outside this component.) Each of these pieces will itself be an equilibrium—since no probability can flow out, and none will be flowing in from other maximal strongly connected components, either. So, what’s an equilibrium like if it’s supported on just a single maximal strongly connected component? Since each strongly connected component is connected and weakly reversible, the theorem I described in this post says there’s exactly one equilibrium probability distribution $\psi$ supported on this component. It’s positive everywhere, and every equilibrium $\psi$ is a scalar multiple of this. Conclusion: for a general Markov process on a finite set, we get one equilibrium probability distribution supported on each maximal strongly connected component. Every equilibrium $\psi$ is a linear combination of these. They form a basis.
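To see this guess in action, here is a small numerical sketch. It is only an illustration, not part of the argument above: it assumes Python with NumPy and SciPy, and it uses a made-up three-state graph with transitions 1 → 0 and 1 → 2 (so the maximal strongly connected components are the single states {0} and {2}) rather than the graph in the pictures. The kernel of H turns out to be two-dimensional and supported on states 0 and 2, and evolving any initial distribution drains all the probability out of state 1:

```python
import numpy as np
from scipy.linalg import expm

# Made-up example: states 0, 1, 2 with transitions 1 -> 0 (rate 1) and 1 -> 2 (rate 2).
# The maximal strongly connected components are {0} and {2}; state 1 is not maximal.
transitions = [(1, 0, 1.0), (1, 2, 2.0)]
K = 3

# Build the Hamiltonian column by column, as in Puzzle 1 of the post.
H = np.zeros((K, K))
for src, tgt, rate in transitions:
    H[tgt, src] += rate
    H[src, src] -= rate

# Equilibria = kernel of H, read off from the rows of V^T with zero singular value.
_, sing, Vt = np.linalg.svd(H)
kernel = Vt[sing < 1e-12]
assert kernel.shape[0] == 2              # one equilibrium per maximal component
assert np.allclose(kernel[:, 1], 0.0)    # every equilibrium vanishes on state 1

# Probability flows into the maximal components and never leaves them.
psi0 = np.array([0.2, 0.5, 0.3])
print(expm(100.0 * H) @ psi0)            # roughly [0.367, 0.0, 0.633], still summing to 1
```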
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 184, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383346438407898, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/194567-distance-between-sets.html
# Thread: 1. ## Distance between sets (metric spaces) Hi there. I was working on this problem, and I wanted to know if my demonstration is right; I'm not sure if it's complete. The problem says: The distance D(A,B) between two nonempty subsets A and B of a metric space (X,d) is defined to be: $D(A,B)=inf d(a,b),a\in{A},b\in{B}$ Inf denotes the infimum. Show that D does not define a metric on the power set of X. Well, so what I considered is that if a belongs to A, and b belongs to B, but a doesn't belong to B, and b doesn't belong to A (i.e. A and B have no elements in common), then the distance could never be zero. But I don't know if this is enough to show that it doesn't define a metric. Anyway, I could show using the sets $A={0,1,2,3...}, B={-1,1,-1,...,(-1)^n},C={0,-1,-2,...}$ that the triangle inequality isn't satisfied, so I have a counterexample. Bye there, and thanks in advance. 2. ## Re: Distance between sets (metric spaces) If $A\cap B\neq \emptyset$, then $D(A,B)=0$, but we may have this equality even if $A\neq B$. The "distance" can be $0$ even if $A$ and $B$ are disjoint: take $X=\mathbb R^2$, $A=\left\{\left(x,\frac 1x\right),x>0\right\}$ and $B=\mathbb R\times \{0\}$. 3. ## Re: Distance between sets (metric spaces) Ok. Then I could use the sets: $A= \left { 0,0,0,0... \right }, B= \left { -1,1,-1,...,(-1)^n \right },C= \left { 0,-1,-2,... \right }$, with the metric d(x,y)=|x-y| to show that the triangle inequality isn't satisfied, right? I think I've got it; I noticed it just after I posted. I didn't prove it yet, but I think it can be done easily. I've defined the metric between the elements of the subsets as d(x,y)=|x-y|; then D(A,B)=1 and D(A,C)=D(C,B)=0, so the triangle inequality isn't satisfied. Is this right? Thank you. 4. ## Re: Distance between sets (metric spaces) Originally Posted by Ulysses Ok. Then I could use the sets: $A= \left { 0,0,0,0... \right }, B= \left { -1,1,-1,...,(-1)^n \right },C= \left { 0,-1,-2,... \right }$, with the metric d(x,y)=|x-y| to show that the triangle inequality isn't satisfied, right? I think I've got it; I noticed it just after I posted. I didn't prove it yet, but I think it can be done easily. The point is: it is totally unnecessary to do any more. The problem is done. It fails the "zero test". So it is not a metric. 5. ## Re: Distance between sets (metric spaces) By the "zero test", do you mean the first proof I attempted was right? I thought so, because the null element must always be present in a definite space, right? Like in algebra. 6. ## Re: Distance between sets (metric spaces) The "zero test" means girdav's first post, saying that "d(A,B)=0 doesn't mean A=B"; this makes d fail to be a metric. 7. ## Re: Distance between sets (metric spaces) Alright. But is any of my proofs right? I appreciate his work, but it's not that intuitive to me. Anyway, I'll take another look at it, but I'd like to know if what I did was right, or if there is any inconsistency in the proofs I've attempted. In the first place, the case I thought of, of A and B having no elements in common, so that zero isn't attained as a distance: would that be enough, or does it not matter at all? And on the other hand, are the three sets I've used to prove that the triangle inequality isn't satisfied right? I see more clearly what he did now. The distance is zero because 1/x gets arbitrarily close to zero, right? 8. ## Re: Distance between sets (metric spaces) Originally Posted by Ulysses Alright. But is any of my proofs right?
Yes, your counterexample in answer #3 is valid (but please, write the sets correctly!). That is, if $A=\{0\}$ , $B=\{-1,1\}$ and $C=\{0,-1,-2,\ldots\}$ , then $D(A,B)=1$ , $D(A,C)=D(C,B)=0$ , hence $D(A,B)\not\leq D(A,C)+D(C,B)$ , so the triangle inequality is not satisfied. 9. ## Re: Distance between sets (metric spaces) Ty Fernando, sorry for that; the thing is that I chose other sets at first, and then "corrected" them, because one of the sets I chose at first didn't work as I expected, and then I realized I should use the zero set. And the {}: I didn't know why it wasn't working (I see now I wasn't using the code properly). Bye there! 10. ## Re: Distance between sets (metric spaces) Use "\{\}" for $\{\}$ 11. ## Re: Distance between sets (metric spaces) I have a few more doubts. Originally Posted by girdav If $A\cap B\neq \emptyset$, then $D(A,B)=0$, but we may have this equality even if $A\neq B$. The "distance" can be $0$ even if $A$ and $B$ are disjoint: take $X=\mathbb R^2$, $A=\left\{\left(x,\frac 1x\right),x>0\right\}$ and $B=\mathbb R\times \{0\}$. Here, the cartesian product $B=\mathbb R\times \{0\}$, which is the product of all the elements of R with zero: is this equal to $B=\{0\}$? I think I understand now: as you defined X in R^2, you must have R^2, right? Would it be a mistake to consider an element in R and an element in R^2? And on the other hand, I wanted to know if the empty set is equal to $B=\{0\}$, I mean, if the empty set is equal to the set whose unique element is zero (I think it's not, but one of my companions asked me, and I wasn't completely sure). Thank you. 12. ## Re: Distance between sets (metric spaces) Originally Posted by Ulysses I have a few more doubts. Here, the cartesian product $B=\mathbb R\times \{0\}$, which is the product of all the elements of R with zero: is this equal to $B=\{0\}$? I think I understand now: as you defined X in R^2, you must have R^2, right? Would it be a mistake to consider an element in R and an element in R^2? $B=\mathbb R\times \{0\}$ has nothing to do with $C=\{0\}$ . For example, $B$ has infinitely many elements and $C$ only one; besides, every element of $B$ has the form $(x,0)$ , which is different from $0$ . And on the other hand, I wanted to know if the empty set is equal to $B=\{0\}$, I mean, if the empty set is equal to the set whose unique element is zero (I think it's not, but one of my companions asked me, and I wasn't completely sure). $B=\{0\}$ is a set with only one element: $0$ . The empty set is a set with no elements: $\emptyset=\{\;\}$ . So, $B\neq \emptyset$ .
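A quick numerical sanity check of the counterexample from posts #3 and #8, only a sketch: it is written in Python and uses a finite truncation of the infinite set C, which is enough here because the infimum 0 is already attained at the element 0.

```python
# A = {0}, B = {-1, 1} as in post #8; a finite truncation stands in for C = {0, -1, -2, ...}.
A = {0}
B = {-1, 1}
C = {-k for k in range(50)}

def D(X, Y):
    """Infimum (here a minimum, since the sets are finite) of |x - y|."""
    return min(abs(x - y) for x in X for y in Y)

print(D(A, B), D(A, C), D(C, B))      # 1 0 0
assert D(A, B) > D(A, C) + D(C, B)    # the triangle inequality fails
```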
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 46, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9714105129241943, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/35236?sort=newest
# Background The complexity classes BPP, BQP, and QMA are defined semantically. Let me try to explain a little bit what is the difference between a semantic definition and a syntactic one. The complexity class P is usually defined as the class of languages accepted in polynomial time by a deterministic Turing machine. Although it seems to be a semantic definition at first, $P$ has an easy syntactic characterization, i.e. deterministic Turing machines with a clock counting the steps up to a fixed polynomial (take a deterministic Turing machine, add a polynomial clock to it such that the new machine will calculate the length of the input $n$, then the value of the polynomial $p(n)$, and simulate the original machine for $p(n)$ steps. The languages accepted by these machines will be in $P$ and there is at least one such machine for each set in $P$). There are also other syntactic characterizations for $P$ in descriptive complexity like $FO(LFP)$, first-order logic with the least fixed point operator. The situation is similar for PP. Having a syntactic characterization is useful, for example a syntactic characterization would allow us to enumerate the sets in the class effectively, and if the enumeration is efficient enough, we can diagonalize against the class to obtain a separation result like time and space hierarchy theorems. My main question is: Is there a syntactic characterization for BPP, BQP, or QMA? I would also like to know about any time or space hierarchy theorem for semantic classes mentioned above. The motivation for this question came from here. I used Google Scholar, the only result that seemed to be relevant was a citation to a master's thesis titled "A logical characterization of the computational complexity class BPP and a quantum algorithm for concentrating entanglement", but I was not able find an online version of it. - When you say, "we can diagonalize against the class to obtain a separation result..", please clarify. I know we can diagonalize against the class TIME(f(n)) to separate TIME(f) from TIME(g) where g = $\omega(f)$ (up to logarithmic terms), but is there such a diagonalization we can use against P to separate it from something else? – Henry Yuen Aug 11 2010 at 17:10 Also, this is not a real answer, but using reasonable complexity assumptions, BPP = P (the result due to Implagiazzo and Wigderson), of course BPP would then be a syntactic class. What would be interesting would be to show that finding a syntactic characterization of BPP would imply derandomization of BPP. – Henry Yuen Aug 11 2010 at 17:14 @Henry Yuen: We can prove P is not equal to EXP, but this also follows from the time hierarchy theorem. A more interesting question would be if there is a result that does not follow from the hierarchy theorems, but I can't think of one right now. Diagonalization results I can think of right now use simulation and therefore it seems to me that the strongest way of stating such a theorem will be w.r.t. the resource, but I may be wrong. This seems to be interesting and I will check if I can find a counterexample to my intuition. – Kaveh Aug 11 2010 at 17:30 @Henry Yuen: Your second comment is also interesting, though I can't think of any reason why a syntactic characterization would imply derandomization by itself. – Kaveh Aug 11 2010 at 17:32 3 @Henry: "finding a syntactic characterization of BPP would imply derandomization of BPP" would be very cool to prove. No idea how to prove it though. 
However (if I understand what you mean by "syntactic characterization") a syntactic characterization would imply a time hierarchy for BPP. I think the survey in Robin's nice answer covers this. – Ryan Williams Aug 12 2010 at 3:09 show 1 more comment ## 2 Answers No, I don't think any syntactic characterization is known for BPP, BQP or QMA. (BPP might turn out to be P, and then we'd have such a characterization of course.) In particular we don't know any languages that are complete for either of these classes. A lot of people believe that classes like QMA do not even have complete languages. (See John Watrous' survey, where he says that "indeed it would be surprising if QMA were shown to have a complete problem having a vacuous promise.") There are hierarchy theorems for BPP with 1 bit of advice, but I don't think we have any for BPP, BQP or QMA. For the advice-based results, see Hierarchy Theorems for Probabilistic Polynomial Time. - well, local hamiltonian is complete for QMA, but it is a promise problem. Also, 5-QSAT is complete. As Watrous puts it, "vacuous promise" which means "decision problem". So, it is not expected that a complete decision problem exists for any semantic class. – Marcos Villagra Aug 12 2010 at 1:38 "Language" traditionally means decision problem. That's why I said none of these classes are known to have complete languages. They all have complete promise problems, of course. – Robin Kothari Aug 12 2010 at 2:46 Thanks for the articles. – Kaveh Aug 13 2010 at 12:15 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. This is more a comment than an answer (since I can't leave comments, yet): I've looked into this question briefly this past winter. As far as I know there is no syntactic definitions of BPP, BQP, or QMA. If you introduce post selection to BQP then you have a syntactic definition, but that is only because PostBQP = PP and PP is syntactic. @Henry Yuen I also don't understand why a syntactic definition of anything would imply derandomization... of course if BPP was also FOL + LFP then we would have derandomization but if BPP was FOL + other gadget then we would not know that without proving that LFP and the other gadget do the same things. - Artem, I don't understand why it would imply derandomization either, but as Ryan Williams said, it would be a very cool thing to prove. In my view, having that sort of thing would mean, informally, that all roads lead to derandomization of BPP, or that BPP is bound to be derandomized. Not only are some reasonable circuit lower bounds conditions for derandomization, but also the property of having a "syntactic characterization" (whatever that may mean formally)? That seems quite special. Well - this is all just mindsand, of course. – Henry Yuen Aug 12 2010 at 6:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9489716291427612, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/92929/list
## Return to Question 6 added 142 characters in body Let $C$ be a symmetric monoidal category, and $f : x \to y$ be a morphism in $C$. I would like to construct the localization $C_f$ explicitly, which solves the universal property $$\mathrm{Hom}_{\otimes}(C_f,D) = \{F \in \mathrm{Hom}_{\otimes}(C,D) : F(f) \text{ iso}\}.$$ I am not interested in a general existence proof or alike; instead I would like to exhibit $C_f$ as an explicit full reflective subcategory of $C$, thereby also showing its existence. The reason is that I want to actually compute something in these localizations which a priori does not simply follow from the universal property. There is a general construction of a localization of a plain category with respect to arbitrary sets of morphisms, which can be found on page 6 of Gabriel-Zisman's Calculus of fractions and homotopy theory. Thus, an object of the localization $C_f$ is an object of $C$, and a morphism is a class of a finite sequence of the form $f_1 s^{-1} f_2 \cdots s^{-1} f_n$ (perhaps without $f_1$ or $f_n$), where the sources and targets should fit, subject to the obvious cancellation rules. But this might not define a (small) set of morphisms, right? Question 1. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ exists (without leaving the universe)? I know the basics about left/right multiplicative systems (as in Gabriel-Zisman, Weibel, Kashiwara-Schapira, etc.), saturations etc., but I could not find any answer to this question in the literature. EDIT: As Theo points out, there is no set-theoretic problem if we localize a category at just one single morphism. But when $C$ is monoidal, there is no reason why the tensor product $C \times C \to C$ extends to a tensor product $C_f \times C_f \to C_f$, because this means that for every $x \in C$ the invertibility of $f$ forces the invertibility of $x \otimes f$ and $f \otimes x$ in the language of categories, which is unplausible. Instead we should better localize at all morphisms $x \otimes f$ and $f \otimes x$, where $x$ runs through all objects of $C$; this is a monoidal class in the language of Day's paper "A Note on Monoidal Localisation". But now there are set-theoretic problems in the description of $C_f$ above. On the other hand, we repair this easily if $C$ has a small colimit-dense subcategory, which happens to be the case when $C$ is presentable. Question 2. Which conditions have to be imposed on $C$ and $f$ so that we can write down explicitly the localization $C_f$ in the $2$-category of symmetric monoidal categories? How does it look like? Question 3. Actually I am interested in cocomplete symmetric monoidal categories and cocontinuous symmetric monoidal functors between them. How does the localization look like in this context? Let me mention a special case where everything works out: Let $\mathcal{L} \in C$ be an object whose symmetry $\mathcal{L}^{\otimes 2} \to \mathcal{L}^{\otimes 2}$ is the identity and $f : 1_C \to \mathcal{L}$ a morphism (imagine a global section of a line bundle on a scheme). Let $C_f \subseteq C$ the full subcategory consisting of those $M \in C$ such that $M \otimes f : M \to M \otimes \mathcal{L}$ is an isomorphism. If $C$ is cocomplete, the inclusion $C_f \subseteq C$ has a left adjoint: It maps $M \in C$ to the colimit of $M \to M \otimes \mathcal{L} \to M \otimes \mathcal{L}^{\otimes 2} \to \dotsc$. 
Using this left adjoint, one can define tensor products and colimits in $C_f$ (one may cite Day's reflection theorem here) and verify easily that $C \leadsto C_f$ is the localization in the context of cocomplete symmetric monoidal categories. One can also verify that if $\mathcal{L}$ is a line bundle on a scheme $X$ and $f \in \Gamma(X,\mathcal{L})$ is a global section, then we really have $\mathrm{Qcoh}(X)_f = \mathrm{Qcoh}(X_f)$, so this categorical localization is compatible with the scheme theoretic localization. However, for other applications, I need more general morphisms $f$. This motivated my question. I am pretty sure that this should be standard in category theory, therefore the reference request tag. 5 added 845 characters in body I am not interested in a general existence proof or alike; instead I would like to exhibit $C_f$ as an explicit full reflective subcategory of $C$, thereby also showing its existence. The reason is that I want to actually compute something in these localizations which a priori does not simply follow from the universal property. There is a general construction of a localization of a plain category with respect to arbitrary sets of morphisms, which can be found on page 6 of Gabriel-Zisman's Calculus of fractions and homotopy theory. Thus, an object of the localization $C_f$ as a plain category is an object of $C$, and a morphism is a class of a finite sequence of the form $f_1 s^{-1} f_2 \cdots s^{-1} f_n$ (perhaps without $f_1$ or $f_n$), where the sources and targets should fit, subject to the obvious cancellation rules. But this might not define a (small) set of morphisms, right? I know the basics about left/right multiplicative systems (as in Gabriel-Zisman, Weibel, Kashiwara-Schapira, etc.), saturations etc., but I could not find any answer to this question in the literature. EDIT: As Theo points out, there is no set-theoretic problem if we localize a category at just one single morphism. But when $C$ is monoidal, there is no reason why the tensor product $C \times C \to C$ extends to a tensor product $C_f \times C_f \to C_f$, because this means that for every $x \in C$ the invertibility of $f$ forces the invertibility of $x \otimes f$ and $f \otimes x$ in the language of categories, which is unplausible. Instead we should better localize at all morphisms $x \otimes f$ and $f \otimes x$, where $x$ runs through all objects of $C$; this is a monoidal class in the language of Day's paper "A Note on Monoidal Localisation". But now there are set-theoretic problems in the description of $C_f$ above. Question 2. Which conditions have to be imposed on $C$ and $f$ so that we can write down explicitly the localization $C_f$ in the $2$-category of symmetric monoidal categories? How does it look like? 4 added 192 characters in body Let $C$ be a symmetric monoidal category, and $f : x \to y$ be a morphism in $C$.
I would like to construct the localization $C_f$ explicitly, which solves the universal property $$\mathrm{Hom}_{\otimes}(C_f,D) = \{F \in \mathrm{Hom}_{\otimes}(C,D) : F(f) \text{ iso}\}.$$ I am not interested in a general existence proof or alike; instead I would like to exhibit $C_f$ as an explicit full reflective subcategory of $C$, thereby also showing its existence. There is a general construction of localizations with respect to arbitrary sets of morphisms, which can be found on page 6 of the book by Gabriel-Zisman. Thus, an object of the localization $C_f$ as a plain category is an object of $C$, and a morphism is a class of a finite sequence of the form $f_1 s^{-1} f_2 \cdots s^{-1} f_n$ (perhaps without $f_1$ or $f_n$), where the sources and targets should fit, subject to the obvious cancellation rules. But this might not define a (small) set of morphisms, right? Question 1. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ exists (without leaving the universe)? I know the basics about left/right multiplicative systems, saturations etc., but I could not find any answer to this question in the literature. Question 2. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ becomes symmetric monoidal and $C \leadsto C_f$ satisfies the universal property with respect to symmetric monoidal categories stated above? According to Day's paper "A Note on Monoidal Localisation" perhaps we should demand that some right multiplicative system associated to $f$ is monoidal, but I don't know how to make this explicit. Question 3. Actually I am interested in cocomplete symmetric monoidal categories and cocontinuous symmetric monoidal functors between them. How does the localization look like in this context? Let me mention a special case where everything works out: Let $\mathcal{L} \in C$ be an object whose symmetry $\mathcal{L}^{\otimes 2} \to \mathcal{L}^{\otimes 2}$ is the identity and $f : 1_C \to \mathcal{L}$ a morphism (imagine a global section of a line bundle on a scheme). Let $C_f \subseteq C$ the full subcategory consisting of those $M \in C$ such that $M \otimes f : M \to M \otimes \mathcal{L}$ is an isomorphism. If $C$ is cocomplete, the inclusion $C_f \subseteq C$ has a left adjoint: It maps $M \in C$ to the colimit of $M \to M \otimes \mathcal{L} \to M \otimes \mathcal{L}^{\otimes 2} \to \dotsc$. Using this left adjoint, one can define tensor products and colimits in $C_f$ (one may cite Day's reflection theorem here) and verify easily that $C \leadsto C_f$ is the localization in the context of cocomplete symmetric monoidal categories. One can also verify that if $\mathcal{L}$ is a line bundle on a scheme $X$ and $f \in \Gamma(X,\mathcal{L})$ is a global section, then we really have $\mathrm{Qcoh}(X)_f = \mathrm{Qcoh}(X_f)$, so this categorical localization is compatible with the scheme theoretic localization. However, for other applications, I need more general morphisms $f$. This motivated my question. I am pretty sure that this should be standard in category theory, therefore the reference request tag. 3 deleted 53 characters in body; deleted 2 characters in body; added 79 characters in body Let $C$ be a symmetric monoidal category, and $f : x \to y$ be a morphism in $C$. 
I would like to construct the localization $C_f$ explicitly, which solves the universal property $$\mathrm{Hom}_{\otimes}(C_f,D) = \{F \in \mathrm{Hom}_{\otimes}(C,D) : F(f) \text{ iso}\}.$$ There is a general construction of localizations with respect to arbitrary sets of morphisms, which can be found on page 6 of the book by Gabriel-Zisman. Thus, an object of the localization $C_f$ as a plain category is an object of $C$, and a morphism is a class of a finite sequence of the form $f_1 s^{-1} f_2 \cdots s^{-1} f_n$ (perhaps without $f_1$ or $f_n$), where the sources and targets should fit, subject to the obvious cancellation rules. But this might not define a (small) set of morphisms, right? Question 1. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ exists (without leaving the universe)? I know the basics about left/right multiplicative systems, saturations etc., but I could not find any answer to this question in the literature. Question 2. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ becomes symmetric monoidal and $C \leadsto C_f$ satisfies the universal property with respect to symmetric monoidal categories stated above? According to Day's paper "A Note on Monoidal Localisation" perhaps we should demand that some right multiplicative system associated to $f$ is monoidal, but I don't know how to make this explicit. Question 3. Actually I am interested in cocomplete symmetric monoidal categories and cocontinuous symmetric monoidal functors between them. How does the localization look like in this context? Let me mention a special case where everything works out: Let $\mathcal{L} \in C$ be an object whose symmetry $\mathcal{L}^{\otimes 2} \to \mathcal{L}^{\otimes 2}$ is the identity and $f : 1_C \to \mathcal{L}$ a morphism (imagine a global section of a line bundle on a scheme). Let $C_f \subseteq C$ the full subcategory consisting of those $M \in C$ such that $M \otimes f : M \to M \otimes \mathcal{L}$ is an isomorphism. If $C$ is cocomplete, the inclusion $C_f \subseteq C$ has a left adjoint: It maps $M \in C$ to the colimit of $M \to M \otimes \mathcal{L} \to M \otimes \mathcal{L}^{\otimes 2} \to \dotsc$. Using this left adjoint, one can define tensor products and colimits in $C_f$ (one may cite Day's reflection theorem here) and verify easily that $C \leadsto C_f$ is the localization in the context of cocomplete symmetric monoidal categories. One can also verify that if $\mathcal{L}$ is a line bundle on a scheme $X$ and $f \in \Gamma(X,\mathcal{L})$ is a global section, then we really have $\mathrm{Qcoh}(X)_f = \mathrm{Qcoh}(X_f)$, so this categorical localization is compatible with the scheme theoretic localization. However, for other applications, I need more general morphisms $f$. This motivated my question. I am pretty sure that this should be standard in category theory, therefore the reference request tag. 2 Fixed a typo, corrected name of a reference Let $C$ be a symmetric monoidal category, and $f : x \to y$ be a morphism in $C$. I would like to construct the localization $C_f$ explicitly, which solves the universal property $$\mathrm{Hom}_{\otimes}(C_f,D) = \{F \in \mathrm{Hom}_{\otimes}(C,D) : F(f) \text{ iso}\}.$$ There is a general construction of localizations with respect to arbitrary sets of morphisms, which can be found on page 6 of the book by Gabriel-Zisman.
Thus, an object of the localization $C_f$ as a plain category is an object of $C$, and a morphism is a class of a finite sequence of the form $f_1 s^{-1} f_2 \cdots s^{-1} f_n$ (perhaps without $f_1$ or $f_n$), where the sources and targets should fit, subject to the obvious cancellation rules. But this might not define a (small) set of morphisms, right? Question 1. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ exists (without leaving the universe)? I know the basics about left/right multiplicative systems, saturations etc., but I could not find any answer to this question in the literature. Question 2. Which conditions have to be imposed on $C$ and $f$ so that $C_f$ becomes symmetric monoidal and $C \leadsto C_f$ satisfies the universal property with respect to symmetric monoidal categories stated above? According to Day's paper "A Note on Monoidal Localization" perhaps we should demand that some right multiplicative system associated to $f$ is monoidal, but I don't know how to make this explicit. Question 3. Actually I am interested in cocomplete symmetric monoidal categories and cococontinuous symmetric monoidal functors between them. How does the localization look like in this context? Let me mention a special case where everything works out: Let $\mathcal{L} \in C$ be an object whose symmetry $\mathcal{L}^{\otimes 2} \to \mathcal{L}^{\otimes 2}$ is the identity and $f : 1_C \to \mathcal{L}$ a morphism (imagine a global section of a line bundle on a scheme). Let $C_f \subseteq C$ the full subcategory consisting of those $M \in C$ such that $M \otimes f : M \to M \otimes \mathcal{L}$ is an isomorphism. If $C$ is cocomplete, the inclusion $C_f \subseteq C$ has a left adjoint: It maps $M \in C$ to the colimit of $M \to M \otimes \mathcal{L} \to M \otimes \mathcal{L}^{\otimes 2} \to \dotsc$. Using this left adjoint, one can define tensor products and colimits in $C_f$ (one may cite Day's reflection theorem here) and verify easily that $C \leadsto C_f$ is the localization in the context of cocomplete symmetric monoidal categories. One can also verify that if $\mathcal{L}$ is a line bundle on a scheme $X$ and $f \in \Gamma(X,\mathcal{L})$ is a global section, then we really have $\mathrm{Qcoh}(X)_f = \mathrm{Qcoh}(X_f)$, so this categorical localization is compatible with the scheme theoretic localization. However, for other applications, I need more general morphisms $f$. This motivated my question. I am pretty sure that this should be standard in category theory, therefore the reference request tag. 1 # Localization of a symmetric monoidal category at a single morphism Let $C$ be a symmetric monoidal category, and $f : x \to y$ be a morphism in $C$. I would like to construct the localization $C_f$ explicitly, which solves the universal property $$\mathrm{Hom}_{\otimes}(C_f,D) = \{F \in \mathrm{Hom}_{\otimes}(C,D) : F(f) \text{ iso}\}.$$ There is a general construction of localizations with respect to arbitrary sets of morphisms, which can be found on page 6 of the book by Gabriel-Zisman. Thus, an object of the localization $C_f$ as a plain category is an object of $C$, and a morphism is a class of a finite sequence of the form $f_1 s^{-1} f_2 \cdots s^{-1} f_n$ (perhaps without $f_1$ or $f_n$), where the sources and targets should fit, subject to the obvious cancellation rules. But this might not define a (small) set of morphisms, right? Question 1.
Which conditions have to be imposed on $C$ and $f$ so that $C_f$ exists (without leaving the universe)? I know the basics about left/right multiplicative systems, saturations etc., but I could not find any answer to this question in the literature. Question 2. Wich conditions have to be imposed on $C$ and $f$ so that $C_f$ becomes symmetric monoidal and $C \leadsto C_f$ satisfies the universal property with respect to symmetric monoidal categories stated above? According to Day's paper "Localization in Monoidal Categories" perhaps we should demand that some right multiplicative system associated to $f$ is monoidal, but I don't know how to make this explicit. Question 3. Actually I am interested in cocomplete symmetric monoidal categories and cococontinuous symmetric monoidal functors between them. How does the localization look like in this context? Let me mention a special case where everything works out: Let $\mathcal{L} \in C$ be an object whose symmetry $\mathcal{L}^{\otimes 2} \to \mathcal{L}^{\otimes 2}$ is the identity and $f : 1_C \to \mathcal{L}$ a morphism (imagine a global section of a line bundle on a scheme). Let $C_f \subseteq C$ the full subcategory consisting of those $M \in C$ such that $M \otimes f : M \to M \otimes \mathcal{L}$ is an isomorphism. If $C$ is cocomplete, the inclusion $C_f \subseteq C$ has a left adjoint: It maps $M \in C$ to the colimit of $M \to M \otimes \mathcal{L} \to M \otimes \mathcal{L}^{\otimes 2} \to \dotsc$. Using this left adjoint, one can define tensor products and colimits in $C_f$ (one may cite Day's reflection theorem here) and verify easily that $C \leadsto C_f$ is the localization in the context of cocomplete symmetric monoidal categories. One can also verify that if $\mathcal{L}$ is a line bundle on a scheme $X$ and $f \in \Gamma(X,\mathcal{L})$ is a global section, then we really have $\mathrm{Qcoh}(X)_f = \mathrm{Qcoh}(X_f)$, so this categorical localization is compatible with the scheme theoretic localization. However, for other applications, I need more general morphisms $f$. This motivated my question. I am pretty sure that this should be standard in category theory, therefore the reference request tag.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 211, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271553158760071, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/3583/trying-to-prove-that-x-sin-frac-pix-ge-pi-cos-frac-pix-for-x-ge/3647
Trying to prove that $x\sin(\frac{\pi}{x})\ge\pi \cos(\frac{\pi}{x})$ for $x\ge 1$ Consider the function ````f[x_] := x Sin[Pi/x] ```` I want to prove that this function is increasing for $x\ge 1$. This can be done with the first derivative. We have to prove that $f'[x]\ge 0$ for $x\ge 1$. First I tried ````FullSimplify[f'[x] >= 0, x >= 1] (*Out: x Sin[\[Pi]/x] >= \[Pi] Cos[\[Pi]/x] *) ```` Then I realized that Mathematica probably doesn't know about the inequality $\tan(x)>x$ for $0<x<\pi/2$. So I tried: ````FullSimplify[f'[x] >= 0, x >= 1 && ForAll[y, 0 < y < Pi/2, Tan[y] > y]] (*Out: x Sin[\[Pi]/x] >= \[Pi] Cos[\[Pi]/x] *) ```` This inequality seems to be easy; I tried replacing `x >= 1` with `x >= Pi` and I get the same result. How can I properly use Mathematica to prove some theorems like this one? EDIT: The main point of the problem is not to prove that the function is increasing (that is just the motivation of the problem); the main point is to prove that $x\sin(\frac{\pi}{x})\ge\pi \cos(\frac{\pi}{x})$ for $x\ge 1$ using that $\tan(x)>x$ for $0<x<\pi/2$, i.e. I want to use the proving features of Mathematica. - 1 If you don't have time for a full proof, then plotting is a very practical solution. It removes all doubt, and can even help you see how to obtain the proof. – becko Apr 16 at 4:51 5 Answers The proof of the original statement that $f(x)\equiv x\sin\frac{\pi}{x}$ is a monotonically increasing function of $x$ for $x>1$ can be done as follows: First, we show that the second derivative $f''(x)$ of the function is negative: ````Simplify[D[x Sin[\[Pi]/x], x, x] < 0, Assumptions -> x > 1] ```` True This means that the first derivative $f'(x)$ is a monotonically decreasing function of $x$ for $x>1$. Now we show that the derivative of the function approaches zero as $x\to\infty$: ````Limit[D[x Sin[\[Pi]/x], x], x -> \[Infinity]] ```` 0 Since the derivative has been shown to be decreasing and to have a limit of zero for $x\to\infty$, it follows that $f'(x) > 0$ for $x>1$. This proves the desired statement about $f(x)$. Edit To take the other route proposed in the edited version of the question, you could do the following: ````Resolve[ForAll[{x}, x > 1 && Tan[\[Pi]/x] >= \[Pi]/x, f'[x] >= 0], Reals] ```` True Edit 2 In the `Resolve` statement above, `ForAll` has three arguments: the variable `{x}`, a condition, and the statement to be proved. In words, this says the following: for all $x$ that satisfy the condition $x>1$ and $\tan(\pi/x)\ge \pi/x$, it holds that $f'(x)\ge0$. Of course, the condition can actually be simplified because the tangent inequality as stated here only holds for $x>2$. To make the condition fully consistent with the desired interval $x>1$, we simply have to replace $x$ by $2 x$ in the tangent inequality. This leaves the inequality unaffected but extends its range of validity to $x>1$. Therefore, we get the following statement that can be fed to Mathematica: ````Resolve[ForAll[x, x > 1 && Tan[\[Pi]/(2 x)] >= \[Pi]/x/2, f'[x] >= 0], Reals] ```` True - 1 Simple, to the point. I've omitted the second-derivative part in my answer, because the inquirer wasn't (IMHO) totally clear about what he was asking. +1 – CHM Mar 28 '12 at 5:12 This is indeed a neat solution to my problem, but my question wasn't too clear, I edited it. – Diego S. Mar 28 '12 at 5:34 @Diego S. I think I understand what you're after. I've added a suggestion for how to use your additional assumption about tan(x) in the proof. – Jens Mar 28 '12 at 7:11 @Jens, +1 well done.
– halirutan Mar 28 '12 at 10:10 @Jens You are the only one that understood me. Your approach is good but not fully correct since `Tan[Pi/x]>=Pi/x` is not true for all `x > 1`. I tried ```Resolve[ForAll[x, x > 1 && (0 < x < \[Pi]/2 \[Implies] Tan[x] > x), f'[x] >= 0] ]``` but didn't work. Maybe i'm asking too much from Mathematica. Can you make this work please? – Diego S. Mar 28 '12 at 19:38 show 3 more comments Mathematica often responds well when provided a little expert assistance. Let's focus on techniques that have a wide application rather than just to this problem. 1. Can the function be decomposed into simpler pieces? Yes, obviously: $f(x)$ is the product of $x$ and $\sin{\pi / x}$. Both are obviously increasing for $x \in [1,2]$. After that, $\sin{\pi / x}$ turns around: it decreases while $x$ continues to increase. The challenge is to show that the increase in $x$ overpowers the decrease in the rest of the function. However, perhaps we have accomplished something: we can focus on the cases $x \ge 2$. 2. It can be more difficult, both theoretically and computationally, to determine characteristics of a function on an infinite (or noncompact) set like the interval $[2,\infty)$ than it is to work on a compact (finite) set. From the appearance of $\pi/x$ in $f$ and the infinite limit of its domain, it is both natural and immediate to attempt the change of variable $y = 1/x$. Because this change reverses the direction of $x$, we now wish to show that $g(y) = \sin{\pi y}/y$ is decreasing on the interval $[0, 1/2]$. Let's begin a Mathematica implementation by creating $g$ and its derivative: ````ClearAll[g, dg]; g[y_] := Sin[\[Pi] y] / y; dg[y_] := Evaluate @ D[g[y], y] ```` Why not try the obvious: can we demonstrate that the derivative of $g$ is everywhere negative on this interval simply by requesting its supremum? ````Maximize[{dg[y], 0 < y <= 1/2}, y] ```` The only solution is `y -> 0` (reached in 0.11 seconds). This technique exploits two properties of `Maximize` announced in its documentation: Maximize will return exact results if given exact input. If the maximum is achieved only infinitesimally outside the region defined by the constraints, or only asymptotically, Maximize will return the supremum and the closest specifiable point. Mathematica issues a warning message, because this is a boundary value and in fact `dg` is not defined at $0$, so to be sure, let's check: ```` In[3]:= Limit[dg[y], y -> 0] Out[3] = 0 ```` Whence--because the supremum is $0$ and occurs uniquely at $y=0$--the derivative of $g$ is non-positive throughout $[0,1/2]$ and negative on $(0,1/2]$, QED. This problem can also be solved via Taylor's Theorem (with remainder, not just naive inspection of the coefficients) by computing the power series of $g$ around $0$ out to first order and analyzing the remainder term. It becomes so trivial with this approach that Mathematica's power is unneeded, so I won't pursue the details here. It is noteworthy, though, that this solution is available only for g, not for f (where the expansion would be around $\infty$, which doesn't accomplish anything in this case). More generally, when tackling more complex or recalcitrant problems, try to decompose them into simpler pieces; use mathematical identities to reformulate the subproblems; attack the subproblems in multiple ways (rather than focusing on just one method, such as that based on a tangent identity in this problem); and let mathematical principles guide you. 
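A quick numeric cross-check of the claim above (that the derivative of $g(y)=\sin(\pi y)/y$ is non-positive on $(0,1/2]$, with supremum $0$ reached only in the limit $y\to 0$) can also be done outside Mathematica. The NumPy sketch below is a sanity check I added, not a proof; the grid resolution is an arbitrary choice.

```python
import numpy as np

def dg(y):
    # d/dy [ sin(pi*y)/y ] = (pi*y*cos(pi*y) - sin(pi*y)) / y**2
    return (np.pi * y * np.cos(np.pi * y) - np.sin(np.pi * y)) / y ** 2

ys = np.linspace(1e-6, 0.5, 200_000)    # a fine grid on (0, 1/2]
vals = dg(ys)
print("largest sampled value of g'(y):", vals.max())            # a tiny negative number, near y ~ 0
print("all sampled values non-positive:", bool(np.all(vals <= 0)))   # True
```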
- I have no rigorous training in mathematics - I'm not quite sure what constitutes a proof and what doesn't. How can I properly use Mathematica to prove some theorems like this one? Well, what you can do is use Mathematica to help visualise the function as well as computing derivatives, integrals or limits. My approach is a bit hacky :) The function Okay, so we're dealing with `x Sin[Pi/x]` which is, obviously, a periodic function. It is also even (its plot is symmetric and `x Sin[Pi/x] == -x Sin[Pi/-x]` evaluates to `True`). The fact that the function is even is interesting if you want to generalize your "proof". From the plot of `Sin[x]`, you can get a good idea of what the plot of `x Sin[Pi/x]` will look like, even before asking Mathematica to generate it. The fact that `Sin[x]` has a value between 0 and 1 while $0\lt x \leq \pi$ is obvious from the graph. What is also obvious is that the function's value has negative sign from $\pi \leq x \leq 2\pi$. What are we expecting? We're expecting a sinusoidal with ascending amplitude and frequency (the `x` multiplying `Sin[Pi/x]`). We can guess that because at values of `x` below 1, we will be evaluating the sine of a number larger than $\pi$, this trend being reversed at `x=1`. What happens after `x=1`? Let's ask Mathematica. Plotting the functions It turns out we were right about the amplitude and frequency. Right up to a point, at `x=1` - past this point, there are no more zeros. There's no need to go further, because it is obvious that our function has an asymptote. But what if we had our `PlotRange` wrong, and the (presumably more complicated) function really does have zeros after `x=1`? Let's again ask Mathematica. It turns out that the limit of our function ````Limit[x Sin[Pi/x], x -> \[Infinity]] ```` is $\pi$. Confirmation of asymptotic behaviour. What about $\pi \cos (\frac{\pi}{x})$? You could take a guess like we did before, or simply not bother and ask Mathematica right away. It is clear from this image that the inequality $x \sin (\frac{\pi}{x})\geq \pi \cos (\frac{\pi}{x})$ can be valid for $x \geq 1$. To confirm, we need to check the limit of the cosine function with ````Limit[Pi Cos[Pi/x], x -> \[Infinity]] ```` which also evaluates to $\pi$. Good news. But how can I prove it's increasing? As halirutan said, you can use derivatives. As a non-mathematician, I would be convinced by the plots of the functions, and their limits. If you want to know how fast each function converges to $\pi$, then go ahead and take a look at each function's first/second derivative plot. This approach might not qualify as rigorous, but I think it shows how you can use Mathematica to walk through math problems. - It's 5am here and no, I'm not already up; so be warned. It seems `Reduce` cannot really help here but maybe someone else finds a direct way to do it. In the meantime you could take your derivative `f'[x]` $$\frac{df}{dx} = \sin \left( \frac{\pi }{x} \right) -\frac{\pi \cos \left(\frac{\pi }{x}\right)}{x}$$ $x$ can have values from 1 to infinity and we need to show, that the above expression is always positive. Keeping in mind the interval for $x$ we could make a substitution to simplify the expression ````f'[x] /. x :> Pi/y ```` which leads us to an easier problem to prove $$\sin (y)-y \cos (y)>0$$ Now you could subtract $\sin(y)$ and then divide by $\cos(y)$. 
In the division step you have to take care, because remember $y$ runs from $\pi$ (when $x=1$) to 0 (when $x$ goes to infinity) and plotting this range for $\cos(y)$ shows ````Plot[Cos[y], {y, 0, Pi}] ```` that you have to make a case-by-case analysis. For $\pi/2 \leq y \leq \pi$ the cosine is negative and you have to turn the sign of the inequality. Therefore, you have to show $$\tan(y)>y,\quad\quad 0 < y < \pi/2$$ and $$\tan(y)<y,\quad\quad \pi/2 < y \leq \pi$$ You maybe have to think about what happens at $\pi/2$ and I'm sure there are easier ways to solve this, but the tangens inequality caught my eye since you already mentioned it. - What about using Taylor series? ````fp = Normal[Series[f'[1/y], {y, 1, 20}]] /. y -> 1/x FullSimplify[fp >= 0, x >= 1] ```` ````True ```` - 1 You cannot proof something by a finite approximation. In this way I could proof you anything. – halirutan Mar 28 '12 at 3:08 I know it is not rigorous but I think Taylor series are good as an aproximation of results. Ideally you should use the infinite terms in series. – FJRA Mar 28 '12 at 3:45 lang-mma
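For readers following the substitution route from the answer above outside Mathematica, here is a small SymPy sketch I added (an outline of the argument, not a push-button proof): with $y=\pi/x$ the target inequality becomes $\sin(y)-y\cos(y)\ge 0$ on $[0,\pi]$, and that holds because the function vanishes at $y=0$ while its derivative is non-negative there.

```python
import sympy as sp

y = sp.symbols('y', real=True)
h = sp.sin(y) - y * sp.cos(y)      # the inequality after substituting y = pi/x

print(sp.simplify(sp.diff(h, y)))  # y*sin(y), which is >= 0 for 0 <= y <= pi
print(h.subs(y, 0))                # 0, so h is non-decreasing from 0 on [0, pi]
```

Since $h(0)=0$ and $h'(y)=y\sin(y)\ge 0$ on $[0,\pi]$, we get $h(y)\ge 0$ there, which is exactly $x\sin(\pi/x)\ge\pi\cos(\pi/x)$ for $x\ge 1$.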
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422779083251953, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45300/e2-mc22-pc2-what-units-are-used-to-measure-e-m-c-and-p
# $E^2 = (mc^2)^2 + (pc)^2$: What units are used to measure $E$, $m$, $c$ and $p$? \begin{equation} E^2 = (mc^2)^2 + (pc)^2 \end{equation} If I am using this equation to figure out the energy of something, what units would I use? Would it be the metric system? I.e. kilograms for $m$, meters per second for $p$, kilometers per second for $c$? And what units of measurement are used for $E$? - 1 Use the SI units. Joules for E. – Michael Luciuk Nov 28 '12 at 11:03 1 "meters per second for p" Er...no. Kg m / s in SI, and likewise the velocity there should be in m/s so that you don't have to mess around with loose factors of $10^3$. Of course particle physicists would use $c=1$ units and measure energy, mass and momentum all in GeV. – dmckee♦ Nov 28 '12 at 16:43 ## 1 Answer Any consistent system will do. That's the entire point of systems of units--if you stick to one, you don't need to worry about the units too much. And it never happens that a certain equation only works in a certain system*. In this case, you would use joules ($\:\mathrm{J}\equiv\:\mathrm{kg\:m^2\:s^{-2}}$), the metric unit of energy. If you were using the cgs system, $m$ would be in grams, $p$ would be in $\:\mathrm{g\:cm\:s^{-1}}$, $c$ would be in centimetres per second, and $E$ would be in ergs ($\:\mathrm{erg}\cong\:\mathrm{g\:cm^2\:s^{-2}}$), Physical constants may change. Also, some equations have some constants set to one (eg Planck units, Gaussian units), so they may disappear entirely. For example, if $c=1$ (Planck units), the equation becomes $E^2=m^2+p^2$. - keep in mind that constants of value $1$ are omitted, thus it is not necessarily true that all equations are equally valid in all unit systems - examples are Lorentz–Heaviside vs Gaussian vs SI units of elecromagnetism as well as various systems of natural units – Christoph Nov 28 '12 at 11:26 @Christoph: True... I'll add that. – Manishearth♦ Nov 28 '12 at 11:27 Why are you saying that 1 J is only approximately equal to 1 kg m$^{2}$ s$^{-2}$? – user12345 Nov 28 '12 at 12:07 @user16307: No, $\cong$ and $\equiv$ mean "congruent to" or "equivalent". $\sim$ and $\simeq$ are "similar to". – Manishearth♦ Nov 28 '12 at 12:50 1 by (some) convention, $\equiv$ is used for equal by definition and $\cong$ for isomorphic (ie structurally equivalent but not necessarily equal); in your case, $\equiv$ would probably be more appropriate – Christoph Nov 28 '12 at 18:55 show 5 more comments
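To make the "any consistent system will do" advice concrete, here is a small Python calculation I added for an electron; the mass and $c$ are rounded reference values and the chosen momentum is arbitrary, so treat the numbers as illustrative only.

```python
m = 9.109e-31        # electron mass, kg
c = 2.998e8          # speed of light, m/s
p = 1.0e-22          # an arbitrary momentum, kg*m/s

E = ((m * c**2) ** 2 + (p * c) ** 2) ** 0.5
print(E, "J")                                # ~8.7e-14 J, since every input was in SI units

# The same calculation in particle-physics units (c = 1, everything in MeV):
m_MeV = 0.511                                # electron rest energy
p_MeV = 0.187                                # the same momentum, expressed as pc in MeV
print((m_MeV**2 + p_MeV**2) ** 0.5, "MeV")   # ~0.544 MeV, the same energy
```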
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9240849018096924, "perplexity_flag": "middle"}
http://0fps.wordpress.com/2012/07/12/smooth-voxel-terrain-part-2/
Mostly geometry ## Smooth Voxel Terrain (Part 2) Posted on July 12, 2012 by Last time we formulated the problem of isosurface extraction and discussed some general approaches at a high level.  Today, we’re going to get very specific and look at meshing in particular. For the sake of concreteness, let us suppose that we have approximated our potential field $f$ by sampling it onto a cubical grid at some fixed resolution.  To get intermediate values, we’ll just interpolate between grid points using the standard trilinear interpolation.  This is like a $C^0$ generalization of Minecraft-style voxel surfaces.  Our goal in this article is to figure out how to extract a mesh of the implicit surface (or zero-crossings of $f$).  In particular, we’re going to look at three different approaches to this problem: ## Marching Cubes By far the most famous method for extracting isosurfaces is the marching cubes algorithm.  In fact, it is so popular that the term `marching cubes’ is even more popular than the term `isosurface’ (at least according to Google)!   It’s quite a feat when an algorithm becomes more popular than the problem which it solves!  The history behind this method is very interesting.  It was originally published back in SIGGRAPH 87, and then summarily patented by the Lorensen and Cline.  This fact has caused a lot of outrage, and is been widely cited as one of the classic examples of patents hampering innovation.  Fortunately, the patent on marching cubes expired back in 2005 and so today you can freely use this algorithm in the US with no fear of litigation. Much of the popularity of marching cubes today is due in no small part to a famous article written by Paul Bourke.  Back in 1994 he made a webpage called “Polygonizing a Scalar Field”, which presented a short, self-contained reference implementation of marching cubes (derived from some earlier work by Cory Gene Bloyd.)  That tiny snippet of a C program is possibly the most copy-pasted code of all time.  I have seen some variation of Bloyd/Bourke’s code in every implementation of marching cubes that I’ve ever looked at, without exception.  There are at least a couple of reasons for this: 1. Paul Bourke’s exposition is really good.  Even today, with many articles and tutorials written on the technique, none of them seem to explain it quite as well.  (And I don’t have any delusions that I will do any better!) 2. Also their implementation is very small and fast.  It uses some clever tricks like a precalculated edge table to speed up vertex generation.  It is difficult to think of any non-trivial way to improve upon it. 3. Finally, marching cubes is incredibly difficult to code from scratch. This last point needs some explaining,  Conceptually, marching cubes is rather simple.  What it does is sample the implicit function along a grid, and then checks the sign of the potential function at each point (either +/-).  Then, for every edge of the cube with a sign change, it finds the point where this edge intersects the volume and adds a vertex (this is just like ray casting a bunch of tiny little segments between each pair of grid points).  The hard part is figuring out how to stitch some surface between these intersection points.  Up to the position of the zero crossings, there are $2^8 = 256$ different possibilities, each of which is determined by the sign of the function at the 8 vertices of the cube: Some of the marching cubes special cases.  (c) Wikipedia, created by Jean-Marie Favreau. Even worse, some of these cases are ambiguous!  
The only way to resolve this is to somewhat arbitrarily break the symmetry of the table based on a case-by-case analysis. What a mess!  Fortunately, if you just download Bloyd/Bourke’s code, then you don’t have to worry about any of this and everything will just work.  No wonder it gets used so much! ## Marching Tetrahedra Both the importance of isosurface extraction and the perceived shortcomings of marching cubes motivated the search for alternatives.  One of the most popular was the marching tetrahedra, introduced by Doi and Koide.  Besides the historical advantage that marching tetrahedra was not patented, it does have a few technical benefits: 1. Marching tetrahedra does not have ambiguous topology, unlike marching cubes.  As a result, surfaces produced by marching tetrahedra are always manifold. 2. The amount of geometry generated per tetrahedra is much smaller, which might make it more suitable for use in say a geometry shader. 3. Finally, marching tetrahedra has only $2^4 = 16$ cases, a number which can be further reduced to just 3 special cases by symmetry considerations.  This is enough that you can work them out by hand. Exercise:  Try working out the cases for marching tetrahedra yourself.  (It is really not bad.) The general idea behind marching tetrahedra is the same as marching cubes, only it uses a tetrahedral subdivision.  Again, the standard reference for practical implementation is Paul Bourke (same page as before, just scroll down a bit.)  While there is a lot to like about marching tetrahedra, it does have some draw backs.  In particular, the meshes you get from marching tetrahedra are typically about 4x larger than marching cubes.  This makes both the algorithm and rendering about 4x slower.  If your main consideration is performance, you may be better off using a cubical method.  On the other hand, if you really need a manifold mesh, then marching tetrahedra could be a good option.  The other nice thing is that if you are obstinate and like to code everything yourself, then marching tetrahedra may be easier since there aren’t too many cases to check. ## The Primal/Dual Classification By now, both marching cubes and tetrahedra are quite old.  However, research into isosurface extraction hardly stopped in the 1980s.  In the intervening years, many new techniques have been developed.  One general class of methods which has proven very effective are the so-called `dual’ schemes.  The first dual method, surface nets, was proposed by Sarah Frisken Gibson in 1999: S.F. Gibson, (1999) “Constrained Elastic Surface Nets“  Mitsubishi Electric Research Labs, Technical Report. The main distinction between dual and primal methods (like marching cubes) is the way they generate surface topology.  In both algorithms, we start with the same input: a volumetric mesh determined by our samples, which I shall take the liberty of calling a sample complex for lack of a better term.  If you’ve never heard of the word cell complex before, you can think of it as an n-dimensional generalization of a triangular mesh, where the `cells’ or facets don’t have to be simplices. In the sample complex, vertices (or 0-cells) correspond to the sample points; edges (1-cells) correspond to pairs of nearby samples; faces (2-cells) bound edges and so on: Here is an illustration of such a complex.  I’ve drawn the vertices where the potential function is negative black, and the ones where it is positive white. 
Both primal and dual methods walk over the sample complex, looking for those cells which cross the 0-level of the potential function.  In the above illustration, this would include the following faces: ### Primal Methods Primal methods, like marching cubes, try to turn the cells crossing the bounary into an isosurface using the following recipe: • Edges crossing the boundary become vertices in the isosurface mesh. • Faces crossing the boundary become edges in the isosurface mesh. • … • n-cells crossing the boundary become (n-1)-cells in the isosurface mesh. One way to construct a primal mesh for our sample complex would be the following: This is pretty nice because it is easy to find intersection points along edges.  Of course, there is some topological ambiguity in this construction.  For non-simplicial cells crossing the boundary it is not always clear how you would glue the cells together: As we have seen, these ambiguities lead to exponentially many special cases, and are generally a huge pain to deal with. ### Dual Methods Dual methods on the other hand use a very different topology for the surface mesh.  Like primal methods, they only consider the cells which intersect the boundary, but the rule they use to construct surface cells is very different: • For every edge crossing the boundary, create an (n-1) cell.  (Face in 3D) • For every face crossing the boundary, create an (n-2) cell. (Edge in 3D) • … • For every d-dimensional cell, create an (n-d) cell. • … • For every n-cell, create a vertex. This creates a much simpler topological structure: The nice thing about this construction is that unlike primal methods, the topology of the dual isosurface mesh is completely determined by the sample complex (so there are no ambiguities).  The disadvantage is that you may sometimes get non-manifold vertices: ## Make Your Own Dual Scheme To create your own dual method, you just have to specify two things: 1. A sample complex. 2. And a rule to assign vertices to every n-cell intersecting the boundary. The second item is the tricky part, and much of the research into dual methods has focused on exploring the possibilities.  It is interesting to note that this is the opposite of primal methods, where finding vertices was pretty easy, but gluing them together consistently turned out to be quite hard. ### Surface Nets Here’s a neat puzzle: what happens if we apply the dual recipe to a regular, cubical grid (like we did in marching cubes)?  Well, it turns out that you get the same boxy, cubical meshes that you’d make in a Minecraft game (topologically speaking)! Left: A dual mesh with vertex positions snapped to integer coordinates.  Right: A dual mesh with smoothed vertex positions. So if you know how to generate Minecraft meshes, then you already know how to make smooth shapes!  All you have to do is squish your vertices down onto the isosurface somehow.  How cool is that? This technique is called “surface nets” (remember when we mentioned them before?)  Of course the trick is to figure out where you place the vertices.  In Gibson’s original paper, she formulated the process of vertex placement as a type of global energy minimization and applied it to arbitrary smooth functions.  Starting with some initial guess for the point on the surface (usually just the center of the box), her idea is to perturb it (using gradient descent) until it eventually hits the surface somewhere.  She also adds a spring energy term to keep the surface nice and globally smooth.  
While this idea sounds pretty good in theory, in practice it can be a bit slow, and getting the balance between the energy terms just right is not always so easy. ### Naive Surface Nets Of course we can often do much better if we make a few assumptions about our functions.  Remember how I said at the beginning that we were going to suppose that we approximated $f$ by trilinear filtering?  Well, we can exploit this fact to derive an optimal placement of the vertex in each cell — without having to do any iterative root finding!  In fact, if we expand out the definition of a trilinear filtered function, then we can see that the 0-set is always a hyperboloid.  This suggests that if we are looking for a 0-crossings, then a good candidate would be to just pick the vertex of the hyperboloid. Unfortunately, calculating this can be a bit of a pain, so let’s do something even simpler: Rather than finding the optimal vertex, let’s just compute the edge crossings (like we did in marching cubes) and then take their center of mass as the vertex for each cube.  Surprisingly, this works pretty well, and the mesh you get from this process looks similar to marching cubes, only with fewer vertices and faces.  Here is a side-by-side comparison: Left: Marching cubes.  Right: Naive surface nets. Another advantage of this method is that it is really easy to code (just like the naive/culling algorithm for generating Minecraft meshes.)  I’ve not seen this technique published or mentioned before (probably because it is too trivial), but I have no doubt someone else has already thought of it.  Perhaps one of you readers knows a citation or some place where it is being used in practice?  Anyway, feel free to steal this idea or use it in your own projects.  I’ve also got a javascript implementation that you can take a look at. ### Dual Contouring Say you aren’t happy with a mesh that is bevelled.  Maybe you want sharp features in your surface, or maybe you just want some more rigorous way to place vertices.  Well my friend, then you should take a look at dual contouring: T. Ju, F. Losasso, S. Schaefer, and J. Warren.  (2004)  “Dual Contouring of Hermite Data“  SIGGRAPH 2004 Dual contouring is a very clever solution to the problem of where to place vertices within a dual mesh.  However, it makes a very big assumption.  In order to use dual contouring you need to know not only the value of the potential function but also its gradient!  That is, for each edge you must compute the point of intersection AND a normal direction.  But if you know this much, then it is possible to reformulate the problem of finding a nice vertex as a type of linear least squares problem.  This technique produces very high quality meshes that can preserve sharp features.  As far as I know, it is still one of the best methods for generating high quality meshes from potential fields. Of course there are some downsides.  The first problem is that you need to have Hermite data, and recovering this from an arbitrary function requires using either numerical differentiation or applying some clunky automatic differentiator.  These tools are nice in theory, but can be difficult to use in practice (especially for things like noise functions or interpolated data).  The second issue is that solving an overdetermined linear least squares problem is much more expensive than taking a few floating point reciprocals, and is also more prone to blowing up unexpectedly when you run out of precision.  
There is some discussion in the paper about how to manage these issues, but it can become very tricky.  As a result, I did not get around to implementng this method in javascript (maybe later, once I find a good linear least squares solver…) ## Demo As usual, I made a WebGL widget to try all this stuff out (caution: this one is a bit browser heavy): ### Click here to try the demo in your browser! This tool box lets you compare marching cubes/tetrahedra and the (naive) surface nets that I described above.  The Perlin noise examples use the javascript code written by Kas Thomas.  Both the marching cubes and marching tetrahedra algorithms are direct ports of Bloyd/Bourke’s C implementation.  Here are some side-by-side comparisons. #### Left-to-right:  Marching Cubes (MC), Marching Tetrahedra (MT), Surface Nets (SN) MC: 15268 verts, 7638 faces. MT: 58580 verts, 17671 faces. SN: 3816 verts, 3701 faces. MC: 1140 verts, 572 faces.  MT: 4200 verts, 1272 faces. SN: 272 verts, 270 faces. MC: 80520 verts, 40276 faces. MT: 302744 verts, 91676 faces. SN: 20122 verts, 20130 faces. MC: 172705 verts, 88071 faces. MT: 639522 verts, 192966 faces. SN: 41888 verts, 40995 faces. A few general notes: • The controls are left mouse to rotate, right mouse to pan, and middle mouse to zoom.  I have no idea how this works on Macs. • I decided to try something different this time and put a little timing widget so you can see how long each algorithm takes.  Of course you really need to be skeptical of those numbers, since it is running in the browser and timings can fluctuate quite randomly depending on totally arbitrary outside forces.  However, it does help you get something of a feel for the relative performance of each method. • In the marching tetrahedra example there are frequently many black triangles.  I’m not sure if this is because there is a bug in my port, or if it is a problem in three.js.  It seems like the issue might be related to the fact that my implementation mixes quads and triangles in the mesh, and that three.js does not handle this situation very well. • I also didn’t implement dual contouring.  It isn’t that much different than surface nets, but in order to make it work you need to get Hermite data and solve some linear least squares problems, which is hard to do in Javascript due to lack of tools. ## Benchmarks To compare the relative performance of each method, I adapted the experimental protocol described in my previous post.  As before, I tested the experiments on a sample sinusoid, varying the frequency over time.  That is, I generated a volume $65^3$ volume plot of $\sin( \frac{n \pi}{2} x ) + \sin( \frac{n \pi}{2} y ) + \sin( \frac{n \pi}{2} z )$ Over the range $[ - \frac{\pi}{2}, + \frac{\pi}{2} ]^3$.  Here are the timings I got, measured in milliseconds | | | | | |-----------|----------------|---------------------|--------------| | Frequency | Marching Cubes | Marching Tetrahedra | Surface Nets | | 0 | 29.93 | 57 | 24.06 | | 1 | 43.62 | 171 | 29.42 | | 2 | 61.48 | 250 | 37.78 | | 3 | 93.31 | 392 | 47.72 | | 4 | 138.2 | 510 | 51.36 | | 5 | 145.8 | 620 | 74.54 | | 6 | 186 | 784 | 83.99 | | 7 | 213.2 | 922 | 97.34 | | 8 | 255.9 | 1070 | 112.4 | | 9 | 272.1 | 1220 | 109.2 | | 10 | 274.6 | 1420 | 124.3 | By far marching tetrahedra is the slowest method, mostly on account of it generating an order of magnitude more triangles.  Marching cubes on the other hand, despite generating nearly 2x as many primitives was still pretty quick.  
For small geometries both marching cubes and surface nets perform comparably.  However, as the isosurfaces become more complicated, eventually surface nets win just on account of creating fewer primitives.  Of course this is a bit like comparing apples-to-oranges, since marching cubes generates triangles while surface nets generate quads, but even so surface nets still produce slightly less than half as many facets on the benchmark.  To see how they stack up, here is a side-by-side comparison of the number of primitives each method generates for the benchmark: | | | | | |-----------|----------------|---------------------|--------------| | Frequency | Marching Cubes | Marching Tetrahedra | Surface Nets | | 0 | 0 | 0 | 0 | | 1 | 15520 | 42701 | 7569 | | 2 | 30512 | 65071 | 14513 | | 3 | 46548 | 102805 | 22695 | | 4 | 61204 | 130840 | 29132 | | 5 | 77504 | 167781 | 37749 | | 6 | 92224 | 197603 | 43861 | | 7 | 108484 | 233265 | 52755 | | 8 | 122576 | 263474 | 58304 | | 9 | 139440 | 298725 | 67665 | | 10 | 154168 | 329083 | 73133 | ## Conclusion Each of the isosurface extraction methods has their relative strengths and weaknesses.  Marching cubes is nice on account of the free and easily usable implementations, and it is also pretty fast.  (Not to mention it is also the most widely known.)  Marching tetrahedra solves some issues with marching cubes at the expense of being much slower and creating far larger meshes.  On the other hand surface nets are much faster and can be extended to generate high quality meshes using more sophisticated vertex selection algorithms.  It is also easy to implement and produces slightly smaller meshes.  The only downside is that it can create non-manifold vertices, which may be a problem for some applications.  I unfortunately never got around to properly implementing dual contouring, mostly because I’d like to avoid having to write a robust linear least squares solver in javascript.  If any of you readers wants to take up the challenge, I’d be interested to see what results you get. ### PS I’ve been messing around with the wordpress theme a lot lately.  For whatever reason, it seems like the old one I was using would continually crash Chrome.  I’ve been trying to find something nice and minimalist.  Hopefully this one works out ok. ### Like this: This entry was posted in Mathematics, Programming, Voxels. Bookmark the permalink. ### 12 Responses to Smooth Voxel Terrain (Part 2) 1. jones1618 says: Thanks for your outstanding survey of algorithms and excellent demo. Like you said, Marching Cubes has become so entrenched no one bothers thinking about alternatives. Question: Since Marching Tetrahedrons produces smooth meshes and avoids non-manifold surfaces, why can’t you just post-process the resulting mesh with a fast and conservative mesh reduction algorithm like Stan Melax’s http://dev.gameres.com/program/visual/3d/PolygonReduction.pdf (PDF)? It seems like that would create a near optimal mesh while keeping the polygon count low. 2. Carlos says: jones1618: You could do that, but the naive two-step approach tends to give you a very large intermediate result, which is unfortunate because you’re just throwing most of the triangles away anyway. Worse yet, the intermediate isosurface will generally grow as the inverse-square of the smallest detail you want to capture. 
A better technique has been presented by Attali, Cohen-Steiner and Edelsbrunner, where the idea is to carefully run both algorithms at once: ftp://ftp-sop.inria.fr/geometrica/dcohen/Papers/sgp_he_da.pdf • mikolalysenko says: That’s a really neat idea! I am working on writing up something about level of detail, but I don’t know when it will be ready yet (might take a week or two). I’ll probably use this reference, if you don’t mind. 3. Cory Bloyd says: It certainly has been fun watching that tiny snippet of code travel the world for so many years. Great article and great demo! • mikolalysenko says: Whoa! The legend himself! I’m speechless. Thank you for commenting! 4. Paul Bourke says: People keep forwarding this to me … nice article. You say “possibly the most copy-pasted code of all time”, not sure about that but I had always thought the following was the most copied/pasted code of mine, at least it has the most contributed language translations. http://paulbourke.net/papers/triangulate/ • mikolalysenko says: I’m shocked! Thank you for the comment! I must say I find your work very inspiring. Perhaps my assessment as the most-copy-pasted was a bit hyperbolic, but there is no denying that is quite widely used. Maybe we could just say that you are the “most copied programmer” in computer graphics? 5. Sean says: Excellent write-up and demo! If you haven’t seen it already, I encourage you to check out Miguel Cepero’s work at Procedural World: http://procworld.blogspot.co.uk/ He uses dual contouring for procedural generation of terrain and architecture, with some impressive results. 6. Jack Pryne says: Great article! I’ve just written my own Marching Cubes routine, (and yes, I too copied Paul Bourke’s code,) but now I’m all fired up to start working on a dual contour method. Thanks for sharing your knowledge; I’ll be reading your further writings. 7. xernobyl says: Would surface nets with a tetrahedron grid instead of cubes generate a better mesh? • mikolalysenko says: The mesh might not necessarily be “better” but it would necessarily be manifold (on the other hand, if you need a manifold mesh, then maybe you would consider it better!). However, since tetrahedral subdivisions are generally more dense, you would probably end up with more vertices/faces to get the same level of accuracy as in a hexahedral subdivision. 8. uelkfr says: Why marching tetrahedra causes stange colored triangles on some spots? http://tinypic.com/r/kud8w/6 http://i48.tinypic.com/kud8w.png
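To make the "naive surface nets" recipe from the post concrete, here is a rough Python/NumPy sketch (the post's own demo is in JavaScript). It is my own simplification, not the author's code: one vertex per sign-changing cell, placed at the mean of that cell's edge crossings, and one dual quad per sign-changing grid edge; quad winding and boundary edges are ignored.

```python
import itertools
import numpy as np

# 12 edges of a unit cube, as pairs of corner indices whose bit patterns differ in one bit
CUBE_EDGES = [(a, b) for a in range(8) for b in range(8)
              if bin(a ^ b).count("1") == 1 and a < b]

def corner_offset(i):
    # corner i of a cell, encoded bitwise as (x, y, z) offsets in {0, 1}
    return np.array([(i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1])

def surface_nets(volume):
    """volume: 3-D array of signed samples; the isosurface is the zero level."""
    nx, ny, nz = volume.shape
    vert_index, verts = {}, []
    # 1. place one vertex per cell that the surface crosses
    for x, y, z in itertools.product(range(nx - 1), range(ny - 1), range(nz - 1)):
        cell = np.array([x, y, z])
        corners = [volume[tuple(cell + corner_offset(i))] for i in range(8)]
        crossings = []
        for a, b in CUBE_EDGES:
            va, vb = corners[a], corners[b]
            if (va < 0) != (vb < 0):                 # sign change along this edge
                t = va / (va - vb)                   # linear interpolation of the zero crossing
                crossings.append(corner_offset(a) + t * (corner_offset(b) - corner_offset(a)))
        if crossings:
            vert_index[(x, y, z)] = len(verts)
            verts.append(cell + np.mean(crossings, axis=0))   # the "naive" vertex placement
    # 2. emit one quad per sign-changing grid edge, joining the 4 cells around that edge
    quads = []
    for axis in range(3):
        for x, y, z in itertools.product(range(nx - 1), range(ny - 1), range(nz - 1)):
            p = np.array([x, y, z])
            q = p.copy(); q[axis] += 1
            if (volume[tuple(p)] < 0) != (volume[tuple(q)] < 0):
                u, v = (axis + 1) % 3, (axis + 2) % 3
                cells = []
                for du, dv in [(0, 0), (-1, 0), (-1, -1), (0, -1)]:
                    c = p.copy(); c[u] += du; c[v] += dv
                    cells.append(vert_index.get(tuple(c)))
                if None not in cells:
                    quads.append(cells)
    return np.array(verts), quads

# usage: a sphere of radius 10 sampled on a 32^3 grid
g = np.mgrid[:32, :32, :32].astype(float) - 16.0
vol = np.sqrt((g ** 2).sum(axis=0)) - 10.0
v, q = surface_nets(vol)
print(len(v), "vertices,", len(q), "quads")
```

Swapping the `np.mean(...)` placement for a least-squares fit against Hermite normals is essentially the step that turns this sketch into dual contouring.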
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9089041352272034, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/55821/the-antipodal-action-on-a-connected-one-dimensional-manifold/55825
## The antipodal action on a connected one dimensional manifold While reading a paper, I came across the following statement: It is impossible to define a $Z_{2}\times Z_{2}$ action on a connected closed curve on a compact Riemann surface. The claim is equivalent to saying that there cannot exist two fixed-point-free involutions on a circle. The author took this statement as an obvious fact and used it to prove a lemma about antiholomorphic involutions on a compact Riemann surface. The claim seems pretty simple, but I cannot find an elegant way to prove that it is indeed true. I have tried to use the following fact to prove it: a connected closed one-dimensional manifold is homeomorphic to a circle, which is diffeomorphic to the real projective line, and the projective line inherits the automorphism group $PGL(2, \mathbb{R})$ from $\mathbb{R}^{2}.$ In $PGL(2, \mathbb{R}),$ the element corresponding to the antipodal map is unique, so the above statement is true. Is this a correct proof? - There must be an additional hypothesis here (for a $\mathbb{Z}_2\times\mathbb{Z}_2$ action on $S^1$), because I can define an action which is trivial on the first summand and acts by the antipodal map ($z \rightarrow -z$) on the 2nd summand – Chris Gerig Feb 18 2011 at 5:30 Notice that there are many, many fixed-point free involutions on a circle... – Mariano Suárez-Alvarez Feb 18 2011 at 5:46 ## 2 Answers Assuming you want fixed-point free actions... Since $\pi_1(S^1)$ is cyclic, all connected coverings of $S^1$ have cyclic group of covering transformations. Now, if $\mathbb Z_2^2$ acted without fixed points on $S^1$, the corresponding quotient map would be a covering $S^1\to S^1$ of group $\mathbb Z_2^2$, which is impossible. Later: a bit less technological: suppose $G=\mathbb Z_2^2$ acts freely on $S^1$ and pick a point $x$. The orbit of $x$ cuts $S^1$ in $4$ segments, and one of them, call it $I$, has $x$ and $\sigma(x)$ as endpoints for some $\sigma\in G\setminus\{1\}$. But the map $\sigma$ maps $I$ to itself, so by continuity, $\sigma$ fixes a point in $I$, which it didn't. - If you mean the action is a free action: If a group $G$ acts freely on an odd-dimensional sphere $S^{2k-1}$ then $G$ has periodic cohomology of period $2k$. And if $G$ is abelian but not cyclic, then $G$ does $\textit{not}$ have periodic cohomology. Thus your group does not act freely on the circle. $\square$ To see the latter claim: a calculation using the Künneth formula shows that $H^n(\mathbb{Z}_2\times\mathbb{Z}_2,\mathbb{Z}_2)$ has $\mathbb{Z}_2$-dimension $n+1$ for $n\ge 0$ and hence is not periodic. -
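To spell out the covering-space step in the first answer (standard facts, added here for the reader; not part of the original answer): a free action of the finite group $G=\mathbb{Z}_2\times\mathbb{Z}_2$ on $S^1$ would be properly discontinuous, so the quotient map $p\colon S^1\to S^1/G$ would be a normal covering with deck group $G$, and the quotient of a compact connected $1$-manifold without boundary is again a circle. A degree-$4$ covering of the circle by itself has $p_*\pi_1(S^1)=4\mathbb{Z}\le\mathbb{Z}$, hence
$$\mathrm{Deck}(p)\;\cong\;\pi_1(S^1)\,/\,p_*\pi_1(S^1)\;\cong\;\mathbb{Z}/4\mathbb{Z},$$
which is cyclic, so it cannot be $\mathbb{Z}_2\times\mathbb{Z}_2$.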
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280055165290833, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/30881-quick-check.html
# Thread: 1. ## Quick Check: EDIT: NVM, I verified it. /EDIT Suppose you make napkin rings by drilling holes with different diameters through two wooden balls (which also have different diameters). You discover that both napkin rings have the same height h, as shown in the figure. (a) Guess which ring has more wood in it. (b) Check your guess: Use cylindrical shells to compute the volume of a napkin ring created by drilling a hole with radius r through the center of a sphere of radius R and express the answer in terms of h. ----------- For b, I got that they have the same volume. That was my interpretation after finally getting: $h=\left(\frac{6V}{\pi}\right)^{1/3}$ Can anyone tell me if this is correct? If it is, then I don't need any more help; if not, I'll have to post my work. 2. I'm going to bump this, b/c I have to go to class in 4 hours. 3. Originally Posted by angel.white I'm going to bump this, b/c I have to go to class in 4 hours. Watch out, you might get an infraction! Anyways, there is not enough information in this problem. What does each radius of the cylinders equal? To compare volumes of WOOD, take the volume of the sphere and subtract the volume of the cylinder from that. This is an estimate because the cylinder gives extra volume that is cut off by the intersection with the sphere. 4. Originally Posted by colby2152 Watch out, you might get an infraction! Anyways, there is not enough information in this problem. What does each radius of the cylinders equal? To compare volumes of WOOD, take the volume of the sphere and subtract the volume of the cylinder from that. This is an estimate because the cylinder gives extra volume that is cut off by the intersection with the sphere. My expectation is that they didn't give that information, because a napkin ring of the same height from a sphere of any radius will have the same volume. I decided this, because I got $h=\left(\frac{6V}{\pi}\right)^{1/3}$ as my answer, and if all heights are equal, and everything else is a constant, then all volumes must be equal. But I did a lot of crazy stuff to get that answer, and while I felt confident about all the steps I took, there were many steps involved, and so my confidence in my answer is reduced as there were many opportunities to make errors. I figured if this answer was correct, other people on this site would already know it, and would be able to easily say "yeah, they all have the same volume" or "no they don't all have the same volume", in which case I will know if my answer was correct, at least, and could go from there. I guess I thought it would be one of those things that knowledgeable people on this site could just look at and know whether it was correct or not without having to do any calculations. 5. Okay, I verified that the answer is correct. I thought about it a bit and realized if I simply chose an h and an R, then I could test it. So I chose h=6, R=9, and got that r=8.49, and V = 113.1; then I chose h=6, R=10, and got that r=9.54, and V = 113.1. So it is correct. I guess I should have looked at that earlier, but I was kind of overwhelmed at the time, studying for midterms, and had a lot of homework in the queue. (Which I got done, w00t).
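For anyone who wants to see the shell computation behind the thread's conclusion done symbolically, here is a SymPy sketch I added (my own check, not from the thread): with hole radius $r=\sqrt{R^2-h^2/4}$ the cylindrical-shell integral collapses to $\pi h^3/6$, independent of $R$.

```python
import sympy as sp

x, R, h = sp.symbols('x R h', positive=True)
r = sp.sqrt(R**2 - h**2 / 4)              # hole radius that leaves a ring of height h (assumes h <= 2R)

shell_height = 2 * sp.sqrt(R**2 - x**2)   # height of the cylindrical shell at radius x
V = sp.integrate(2 * sp.pi * x * shell_height, (x, r, R))
print(sp.simplify(V))                     # pi*h**3/6
```

With $h = 6$ this gives $36\pi \approx 113.1$, matching the numbers in the last post.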
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.977861762046814, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/2531/which-attacks-can-be-avoided-by-the-use-of-ofb-instead-of-ecb
# Which attacks can be avoided by the use of OFB instead of ECB? For a file encryption program, I was told to use Output Feedback mode (OFB) instead of ECB (Electronic code book) mode. Which attacks can I avoid by this choice? - – B-Con May 3 '12 at 18:02 When you are considering using a particular mode, it is generally useful to ponder what security properties you need from the mode, and whether that mode provides those properties. What security properties are you looking for? – poncho May 3 '12 at 19:46 security for encrypted message and encrypted file. – goldroger May 4 '12 at 0:27 @JohnPaulParreño: I edited your question to be valid English, and still ask what you wanted to ask (I hope) ... sorry, I should have done this before, but I somehow missed your question. Please check that this is actually what you want to ask, and feel free to edit again. – Paŭlo Ebermann♦ Jun 5 '12 at 18:16 that's okay, It's more understandable now. – goldroger Jun 5 '12 at 18:22 ## 3 Answers OFB is a mode of operation to ensure confidentiality of messages a) longer than the block size of the encryption algorithm, and b) that can be re-broadcast. The motivation for these kinds of modes it to avoid the weaknesses that come from using plain ECB mode. To be precise, the typical attack on ECB mode involves analyzing the ciphertext and looking for repeated blocks. Repeated ciphertext blocks mean repeated plaintext blocks, and knowing about repeated plaintext can help the attacker analyze the captured ciphertext. (Meaning that ECB is not semantically secure.) In some cases, this is enough for the attacker to learn all, or almost all, of the plaintext. OFB mode prevents this from happening by using randomized encryption to ensure that repeated plaintext within the message does not cause repeated ciphertext. OFB is resistant to ciphertext errors in that if the ciphertext has bits modified in error, only the bits of the plaintext that directly correspond to the ciphertext are modified. Because it OFB mode works like a stream cipher, it has the typical weakness that allows known plaintext to be easily modified by an attacker. (Ie, if OFB mode produces a keystream $K$, then the ciphertext $C$ is defined as $C_i := P_i \oplus K_i$, so if $P_i$ is known to the attacker then so is $K_i$, so they can create a new ciphertext $C'_i := P'_i \oplus K_i$ where $P'_i := P_i \oplus X$ and $X$ is a value chosen to produce the desired modified plaintext $P'_i$.) But since message confidentiality (the goal of OFB) does not encompass message integrity, this is not necessarily a big deal. - ECB mode is a deterministic encryption, instead in OFB if the initial vector is random choosed (and of course published with the cryptogram) is a random encryption. What's the matter with det.enc.? The problem is that if you encoded two time the same message you are going to get two time the same chipertext, so the adversary can understand that you said the same thing twice and she learned something! - Wikipedia has an excellent visual demonstration of the insecurity of ECB mode when applied to (potentially) repetitive data: Here, the first picture on the left shows a simple cartoon image (Tux the Penguin). The second image is the same, but with the (raw, uncompressed RGB) image data encrypted using ECB mode. While details of the image are scrambled, the outline is still clearly visible because identical input blocks (found mainly in the areas with solid color) produce identical output in ECB mode. 
The third image shows the result of the same encryption process using a different mode that lacks this weakness, such as CBC, CFB, OFB or CTR. Of course, if one used ECB mode to encrypt a compressed image, or some other data that lacks such obvious redundancies, then the patterns in the output would not be so obvious either. Still, many real-world files do contain redundancies — if they didn't, compression programs would be useless — and it's hard to be sure that those cannot be used by an attacker to compromise ECB mode encryption. (To make the other modes secure even if the same key is used for more than one message, one also needs to include a suitable random IV or nonce. Presumably, the IV has been omitted from the example output here to make it match the length of the input.) -
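The repeated-block leak is easy to reproduce. The sketch below uses the Python `cryptography` package (an illustration I added, not part of the original answers): a plaintext made of identical 16-byte blocks yields a single repeated ciphertext block under ECB, but all-distinct blocks under OFB with a random IV.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
iv = os.urandom(16)
plaintext = b"A" * 16 * 8                      # eight identical AES blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

for name, mode in [("ECB", modes.ECB()), ("OFB", modes.OFB(iv))]:
    ct = encrypt(mode)
    distinct = {ct[i:i + 16] for i in range(0, len(ct), 16)}
    print(name, "distinct ciphertext blocks:", len(distinct), "of", len(ct) // 16)
# Expected output: ECB gives 1 of 8, OFB gives 8 of 8 (with overwhelming probability)
```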
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919074535369873, "perplexity_flag": "middle"}
http://nrich.maths.org/4313/solution
# Cosy Corner ##### Stage: 3 Challenge Level: Congratulations to Mikey from the Archbishop of York Junior School who sent in the following solution based on theoretical probability: We started off by thinking of 6 different objects and how many different ways there are of arranging them. This gives 6 $\times$5 $\times$4 $\times$3 $\times$2 $\times$1 = 720 ways of arranging 6 different coloured balls. But 3 of our balls are red - they can be arranged in 3 $\times$2 $\times$1 = 6 different ways that all look the same. (ABC, ACB, BAC, BCA, CBA, CAB) Similarly there are 2 identical blue balls that can be arranged in 2 $\times$1 = 2 different ways that look the same. (AB, BA) So although we have 720 different ways of arranging the balls only so many of them will look different in this question. There are 720 / (6 $\times$2) different looking ways of arranging the balls in this question, giving 60 different looking triangles. At first we thought there was only one way for the reds to all lose and that is for them to be in the middle of each side. But then we realised that the corner 3 balls could be arranged differently with the yellow ball in each of the 3 corners. Hence there are 3 different ways to lose out of 60, or more simply 1 expected loss in every 20, ie 5% loss, 95% win. The online scenario tester supports the 0.95, 95% chance of winning.
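The $57/60 = 95\%$ figure can be cross-checked by brute force. The short Python script below assumes, as the solution above implies, that you lose exactly when all three red balls sit on the mid-sides, i.e. you win whenever at least one corner is red.

```python
from itertools import permutations
from fractions import Fraction

balls = "RRRBBY"                      # 3 red, 2 blue, 1 yellow
wins = total = 0
for arrangement in permutations(balls):
    total += 1
    if "R" in arrangement[:3]:        # treat the first three slots as the corners
        wins += 1
print(Fraction(wins, total))          # 19/20, i.e. a 95% chance of winning
```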
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525927305221558, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/5921/how-and-why-can-a-decryption-program-tell-me-that-a-key-is-incorrect
# How and why can a decryption program tell me that a key is incorrect? I have noticed that some programs used for file encryption will tell you if an entered key is wrong when you try to decrypt. It seems (to me at least) that this would mean that the key somehow is written into the encrypted file. And the algorithms that I know of will produce an output even if the key is wrong. 1. How does one built into an algorithm this type of validation of a key? Is it just a matter of encrypting the key with the original text? 2. Is this a security problem? - ## 3 Answers First off, many block modes of operation require a message to be padded so that its length is evenly divisible by the block size of the cipher. CBC mode (Cipher Block Chaining), for instance, typically pads a message either with an entire block of zeroes if it happens to be exactly divisible by the block size, or with a given number of bytes that will extend the message to the next block, each of those bytes being set to the number of bytes added as padding (one byte set to 0x01, two bytes set to 0x02, 16 bytes set to 0x10, etc). Now, you have a checksum of sorts built into the message. If the last block of the message, when decrypted, doesn't have valid padding, then the decryption has failed; either the message was corrupted in transit (CBC mode results in a "cascading" of error due to the XORing of the previous ciphertext block with the current plaintext block before encrypting), or the key used to decrypt was incorrect. An application for file encryption that uses some other sort of integrity check, such as mirroring/parity, could verify that the data is OK, and thus the only other explanation is that you used the wrong key. As an aside, a system that can tell you whether a particular ciphertext message was properly padded is known as a "padding oracle", and it is a vulnerability of modes like CBC, because the cipher (initialized by the legitimate user with the proper key) can be fed a series of "chosen ciphertexts" to try to decrypt, each based on combinations of the real ciphertext and some random data, and the behaviors analyzed to reverse-engineer the real plaintext. More advanced cipher modes incorporate a single-purpose message authentication feature into the encryption, which will cause decryption to fail in the same way with either a bad key or a corrupted ciphertext. CCM, which is Counter w/ CBC-MAC, is one of these modes; first, the message is "hashed" by running it through CBC encryption with the given key, but only keeping the last block of the ciphertext (remember that "chaining" of each previous ciphertext into the next block of plaintext, and the cascading error it causes? That's a beautiful way to calculated a "keyed hash" of the message). The message and its MAC are then encrypted again in Counter mode (related to CBC but slightly different; instead of the previous block of ciphertext, a nonce, produced by a combination of the IV and a sequential counter, is combined with each block of plaintext to "salt" it) to produce the ciphertext that is transmitted or persisted. To decrypt, the message is decrypted in Counter mode with the key, then the message portion is hashed in CBC mode with the same key and compared to the decrypted MAC. If the MACs don't match, an error is given. Again, as used in a file encryption application, if there is an independent method of verifying that file integrity is good, the only other explanation is that the wrong key was used. 
Continuing the aside, the beauty of this mode is that there's no way to turn it against itself as a padding oracle; if a ciphertext has been tampered with, or if it was decrypted with the wrong key, the MACs won't match up, and with that being the test for proper encryption (and thus the error given), decryption failure gives an attacker much less information (pretty much every attempt except one using the correct key and an untampered-with ciphertext will fail with exactly the same error every time). Another similar mode is Galois/Counter Mode or GCM, which has similar behavior but better performance and parallelization due to the use of a faster checksum calculation. - It's not a security problem but a necessary feature. It's not an exact science to distinguish a "good decryption" from a "bad decryption". What if the user had encrypted random data? you would not be able to figure out if the key is correct or not from that sole information, since in both cases the decrypted output would look completely random! Similarly, what if the user typed his key wrong? You want to be able to inform him he made a typo ("invalid password") instead of blindly decrypting garbage and waiting for the user to realize he typed it wrong and try again. What most programs do is store the hash of the key (or an HMAC of the encrypted file, using the encryption key) so that they can verify if the key is correct (or if the file is corrupt - you get free integrity/authentication with the HMAC method) without disclosing the key at all. An attacker would need to find an exact preimage to obtain the actual encryption key, which is designed to be infeasible. In essence, you use a one-way function to store the key's image somewhere in the file, so that anyone without the key cannot invert the function and retrieve the key, but anyone who has the key can easily feed it into the one-way function and compare the result with the value that's in the file. If it matches, it's the correct key! (with very high probability) - You should not store a normal hash of the key, but preferably (as mentioned) encrypt or MAC something with the key which then can checked. – Paŭlo Ebermann♦ Jan 7 at 17:07 @Paŭlo: While storing a plain hash of a low-entropy password would be bad, I'm not aware of any issues with storing a hash of an encryption key properly derived from the password using a key-stretching KDF. If there's something wrong with that that I'm missing, please do let me know. – Ilmari Karonen Jan 10 at 1:11 By using padding, one can tell if the decryption is correct. Padding is used when the message length is not a multiple of the block size. You append predictable data at the end of the message (one "1" followed by several "0" for example) and then you encrypt it. If you find the correct "1000..." sequence at the end of decrypted message, it means it's ok. Check http://en.wikipedia.org/wiki/Padding_(cryptography) It is not a perfect solution however, since symmetric ciphers do not always provide integrity. Even with padding, you should use HMAC to check if somebody messed with your data during transfer. - 2 Note most symmetric padding schemes will fail to provide integrity with probability $\frac{1}{256}$ in the worst case, so it's really just a sanity check and not a substitute to proper integrity checking. – Thomas Jan 11 at 21:09 1 Exactly. Symmetric encryption is for confidentiality only, use MACs for data integrity – Romain Feb 19 at 13:43
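As a concrete illustration of the authenticated-mode approach described in the first answer, here is a short sketch I added using AES-GCM from the Python `cryptography` package (not from the thread; a real file-encryption tool would derive the key from the password with a KDF such as scrypt): decrypting with the wrong key, or with a tampered ciphertext, fails cleanly with `InvalidTag`, which is exactly the "wrong password" signal a program can show the user.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

good_key = AESGCM.generate_key(bit_length=128)
bad_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)

ciphertext = AESGCM(good_key).encrypt(nonce, b"secret file contents", None)

try:
    AESGCM(bad_key).decrypt(nonce, ciphertext, None)
except InvalidTag:
    print("wrong key: authentication tag does not verify")

print(AESGCM(good_key).decrypt(nonce, ciphertext, None))   # b'secret file contents'
```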
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937655508518219, "perplexity_flag": "middle"}
http://gauravtiwari.org/tag/mathematics/
# MY DIGITAL NOTEBOOK
A Personal Blog On Mathematical Sciences and Technology

# Tag Archives: mathematics

## Welcome 2012 – The National Mathematical Year in India
Wednesday, December 28th, 2011 08:23 / 18 Comments

Srinivasa Ramanujan (Photo credit: Wikipedia)

I was very pleased to read the news that the Government of India has decided to celebrate the upcoming year 2012 as the National Mathematical Year. It is the 125th birth anniversary of the math wizard Srinivasa Ramanujan (1887-1920), one of the greatest mathematicians India ever produced. Still, this is 'not' the main reason for declaring 2012 the National Mathematical Year; it is only a tribute to him. The main reason is the lack of mathematical awareness among Indian students. First of all, only a few students graduate in Mathematics, and second, many do not choose mathematics as a primary subject at the early levels. As mathematics is not a very lucrative stream, most students go for professional courses such as Engineering, Medicine, Business and Management. The remaining graduates who enjoy science move into the physical or chemical sciences. The engineering craze has developed the field of Computer Science, but not so much theoretical Computer Science, which is one of the most recommended branches in mathematics. Statistics and Combinatorics are almost 'dead' in many Indian universities and colleges. No one wants to deal with those brain-cracking math problems: neither students nor professors. Institutes where mathematics is taught are struggling with a lack of talented lecturers. Talented mathematicians don't want to teach here since they aren't paid much, and ordinary lecturers can't do much more. India is almost 'zero' in Mathematics, and some people, including critics, still roar that we discovered 'zero' and 'pi' and that we had Ramanujan. (more…)

## Claim for a Prime Number Formula
Sunday, December 11th, 2011 14:42 / 4 Comments

Dr. SMRH Moosavi has claimed that he has derived a general formula for finding the $n$-th prime number. More details can be found at PrimeNumbersFormula.com, and there is a brief discussion at Math.SE titled "Formula for the nth prime number: discovered?" The original post shows further excerpts: images of the claimed general formula and of the formula for the $k$-th prime number.

## A Trip to Mathematics: Part-I Logic
Tuesday, October 25th, 2011 13:34 / 5 Comments

# About "A Trip To Mathematics":
A Trip to Mathematics is an indefinitely long series, aimed at generally interested readers and undergraduate students. This series will deal with Basic Mathematics as well as Advanced Mathematics in a very interactive manner. Each post in this series is kept short so that readers can grasp the concepts. Critiques and suggestions are invited in the form of comments.

# What is Logic?
If mathematics is regarded as a language, then logic is its grammar. In other words, logical precision has the same importance in mathematics as grammatical accuracy in a language. As linguistic grammar has sentences and statements, logic has them too. Let us first discuss sentences and statements, and then we shall proceed further into logic.

# Sentence & Statements
A sentence is a collection of words that together have some meaning. For example:
1. Math is a tough subject.
2. English is not a tough subject.
3. Math and English both are tough subjects.
4. Either Math or English is a tough subject.
5. If Math is a tough subject, then English is also a tough subject.
6.
Math is a tough subject if and only if English is a tough subject.

Just have a quick look at the above collections of words. They are all sentences, since each has some meaning. The first sentence is called a prime sentence, i.e., a sentence containing no connectives. The five words
• not
• and
• or
• if …. then
• if and only if
or their combinations are called 'connectives'. The other sentences (all but the first) are called composite sentences, i.e., sentences in which one or more connectives appear. Remember that there is no difference between a sentence and a statement in general logic. In this series, sentences and statements will have the same meaning.

# Connectives
not: A sentence which is modified by the word "not" is called the negation of the original sentence. For example: "English is not a tough subject" is the negation of "English is a tough subject". Also, "3 is not a prime" is the negation of "3 is a prime". Always note that negation doesn't simply mean the 'opposite' of a sentence. For example, you cannot write "English is a simple subject" as the negation of "English is a tough subject". In mathematical writings, symbols are often used for conciseness. The negation of sentences/statements is expressed by putting a slash (/) over the symbol which incorporates the principal verb in the statement. For example: The statement $x=y$ (read 'x is equal to y') is negated as $x \ne y$ (read 'x is not equal to y'). Similarly, $x \notin A$ (read 'x does not belong to set A') is the negation of $x \in A$ (read 'x belongs to set A'). Statements are sometimes represented by symbols like p, q, r, s etc. With this notation there is a symbol, $\neg$ (read as 'not'), for negation. For example, if 'p' stands for the statement "Terence Tao is a professor", then $\neg p$ is read as 'not p' and stands for "Terence Tao is not a professor." Sometimes ~p is also used for the negation of p.

and: The word "and" is used to join two sentences to form a composite sentence which is called the conjunction of the two sentences. For example, the sentence "I am writing, and my sister is reading" is the conjunction of the two sentences: "I am writing" and "My sister is reading". In ordinary language (English), words like "but, while" are used as approximate synonyms for "and"; however, in math we shall ignore possible differences in shades of meaning which might accompany the use of one in the place of the other. This allows us to write "I am writing but my sister is reading" as having the same mathematical meaning as above. The standard notation for conjunction is $\wedge$, read as 'and'. If p and q are statements then their conjunction is denoted by $p \wedge q$ and is read as 'p and q'.

or: A sentence formed by connecting two sentences with the word "or" is called the disjunction of the two sentences. For example, "Justin Bieber is a celebrity, or Sachin Tendulkar is a footballer." is a disjunction of "Justin Bieber is a celebrity" and "Sachin Tendulkar is a footballer". Sometimes we put the word 'either' before the first statement to make the disjunction sound nice, but it is not necessary to do so, so far as a logician is concerned. The symbolic notation for disjunction is $\vee$, read as 'or'. If p and q are two statements, their disjunction is represented by $p \vee q$ and read as 'p or q'.

if….then: From two sentences we may construct one of the form "If . . . . . then . . ."; this is called a conditional sentence. The sentence immediately following IF is the antecedent, and the sentence immediately following THEN is the consequent.
For example, “If 5 <6, 6<7, then 5<7” is a conditional sentence whith “5<6, 6<7” as antecedent and “5<7” as consequent. If p and q are antecedent and consequent sentences respectively, then the conditional sentence can be written as: “If p then q”. This can be mathematically represented as $p \Rightarrow q$ and is read as “p implies q” and the statement sometimes is also called implication statement. Several other ways are available to paraphrase implication statements including: 1. If p then q 2. p implies q 3. q follows from p 4. q is a logical consequence of p 5. p (is true) only if q (is true) 6. p is a sufficient condition for q 7. q is a necessary condition for p If and Only If:  The phrase “if and only if” (abbreviated as ‘iff‘) is used to obtain a bi-conditional sentence. For example, “A triangle is called a right angled triangle, if and only if one of its angles is 90°.” This sentence can be understood in either ways: “A triangle is called a right angled triangle if one of its angles is 90°” and “One of angles of a triangle is 90° if the triangle is right angled triangle.” This means that first prime sentence implies second prime sentence and second prime sentence implies first one. (This is why ‘iff’ is sometimes called double-implication.) Another example is “A glass is half filled iff that glass is half empty.“ If p and q are two statements, then we regard the biconditional statement as “p if and only if q” or “p iff q” and mathematically represent by ” $p\iff q$ “. $\iff$ represents double implication and read as ‘if and only if’. In the statement $p \iff q$, the implication $p \Rightarrow q$ is called direct implication and the implication $q \Rightarrow p$ is called the converse implication of the statement. # Other terms in logic: Stronger and Weaker Statements: A statement p is stronger than a statement q (or that q is weaker than p) if the implication statement $p \Rightarrow q$ is true. Strictly Stronger and Strictly Weaker Statements: The word ‘stronger‘(or weaker) does not necessarily mean ‘strictly stronger‘ (or strictly weaker). For example, every statement is stronger than itself, since $p \Rightarrow p$. The apparent paradox here is purely linguistical. If we want to avoid it, we should replace the word stronger by the phrase ‘stronger than‘ or ‘possibly as strong as‘. If $p \Rightarrow q$ is true but its converse is false ($q \not \Rightarrow p$), then we say that p is strictly stronger than q (or that q is strictly weaker than p). For example it is easy to say that a given quadrilateral is a rhombus that to say it is a parallelogram. Another understandable example is that ” If a blog is hosted on WordPress.com, it is powered with WordPress software.” is true but ” If a blog is powered with WordPress software , it is hosted on WordPress.com” is not true. # Logical Approach What exactly is the difference between a mathematician, a physicist and a layman? Let us suppose they all start measuring the angles of hundreds of triangles of various shapes, find the sum in each case and keep a record. Suppose the layman finds that with one or two exceptions the sum in each case comes out to be 180 degrees. He will ignore the exceptions and state ‘The sum of the three angles in a triangle is 180 degrees.’ A physicist will be more cautious in dealing the exceptional cases. He will examine then more carefully. If he finds that the sum in them some where 179 degrees to 181 degrees, say, then if will attribute the deviation to experimental errors. 
He will state a law – 'The sum of the three angles of any triangle is 180 degrees.' He will then watch happily as the rest of the world puts his law to the test and finds that it holds good in thousands of different cases, until somebody comes up with a triangle in which the law fails miserably. The physicist now has to withdraw his law altogether or else replace it by some other law which holds good in all the cases tried. Even this new law may have to be modified at a later date. And this will continue without end. A mathematician will be the fussiest of all. If there is even a single exception, he will refrain from saying anything. Even when millions of triangles are tried without a single exception, he will not state it as a theorem that the sum of the three angles in 'any' triangle is 180 degrees. The reason is that there are infinitely many different types of triangles. To generalise from a million to infinity is as baseless to a mathematician as to generalise from one to a million. He will at the most make a conjecture and say that there is 'strong evidence' suggesting that the conjecture is true. The approach taken by the layman or the physicist is known as the inductive approach, whereas the mathematician's approach is called the deductive approach.

# Inductive Approach
In the inductive approach, we make a few observations and generalise. Exceptions are generally not counted in the inductive approach.

# Deductive Approach
In this approach, we deduce from something which is already proven.

# Axioms or Postulates
Sometimes, when deducing theorems or conclusions from other theorems, we reach a stage where a certain statement cannot be proved from any 'other' proved statement and must be taken for granted to be true; such a statement is called an axiom or a postulate. Each branch of mathematics has its own postulates or axioms. For example, the most fundamental axiom of geometry is that infinitely many lines can be drawn passing through a single point. The whole beautiful structure of geometry is based on five or six such axioms, and every theorem in geometry can ultimately be deduced from these axioms.

# Argument, Premises and Conclusion
An argument is, really speaking, nothing more than an implication statement. Its hypothesis consists of the conjunction of several statements, called premises. In giving an argument, its premises are first listed (in any order), and then, connecting them all, a conclusion is given. Example of an argument:
Premises:
$p_1$: Every man is mortal.
$p_2$: Ram is a man.
————————————
Conclusion:
$q$: Ram is mortal.
Symbolically, let us denote the premises of an argument by $p_1, p_2, \ldots , p_n$ and its conclusion by $q$. Then the argument is the statement $(p_1 \wedge p_2 \wedge \ldots \wedge p_n) \Rightarrow q$. If this implication is true, the argument is valid; otherwise it is invalid. To be continued……
###### Suggested Readings:
• Basic logic – connectives – NOT (gowers.wordpress.com)
• Welcome to the Cambridge Mathematical Tripos (gowers.wordpress.com)

## A Yes No Puzzle
Friday, October 7th, 2011 00:11 / 19 Comments

This is not just math, but a very good test of linguistic reasoning. If you are serious about this test and think that you have a sharp [at least average] brain, then read the statement (only) below, summarize it, find the conclusion, and then answer whether the summary of the statement is Yes or No.
[And if you're not serious about the test ...then read the whole post to know what the stupid author was trying to tell you. ]

STATEMENT: If the question you answered before you answered the question you answered after you answered the question you answered before you answered this one, was harder than the question you answered after you answered the question you answered before you answered this one, was the question you answered before you answered this one harder than this one? YES or NO? (more…)

## Blog of the Month Awards – October 2011
Thursday, October 6th, 2011 11:56 / 11 Comments

A reader's brain is variable: it changes according to what it reads. I have changed the pattern of selection and the style of writing about BLOG OF THE MONTH. At the beginning of August, I planned to select some blogs from the education blogosphere and give them awards to appreciate their excellent work. I know these awards will probably never make a difference, but I hope that they'll keep up their good work. So, here is the list of my 10 most favorite blogs, one of which, Gowers's Weblog, is my Blog of The Month for October 2011.
• Gowers's Weblog: Blog of the Month for October 2011. Prof. Timothy Gowers (a.k.a. Tim Gowers) is a Fields medalist and an eminent mathematician. He is a member of the Department of Pure Mathematics and Mathematical Statistics at Cambridge University. He is an idol for many students and math majors, including me. I was not a huge fan of his writing, as he had written only a few articles after I joined the web. But since he started his series on 'CAMBRIDGE TEACHING', I have become a regular reader of his weblog. A math student must read his posts on Cambridge Teaching. Most Recent Post: Basic logic — relationships between statements — converses and contrapositives (more…)

## Fermat Numbers
A Fermat number is an integer of the form $F_n=2^{2^n} +1, \ \ n \ge 0$. For example: Putting $n := 0,1,2 \ldots$ in $F_n=2^{2^n}+1$ we get $F_0=3$, $F_1=5$, $F_2=17$, $F_3=257$ etc. Fermat observed that the integers $F_0, F_1, F_2, F_3, \ldots$ he computed were prime numbers and announced that $F_n$ is a prime for each natural value of $n$. In writing to Prof. Mersenne, Fermat confidently announced: I have found that numbers of the form $2^{2^n}+1$ are always prime numbers and have long since signified to analysts the truth of this theorem. However, he also accepted that he was unable to prove it theoretically. Euler in 1732 refuted Fermat's claim, showing that $F_1$ – $F_4$ are primes but $F_5=2^{2^5}+1=4294967297$ is not a prime, since it is divisible by 641. Thus Fermat numbers are not necessarily prime; a Fermat number which is prime is called a Fermat prime. Euler used division to prove the fact that $F_5$ is not a prime. The elementary proof of Euler's negation below is due to G. Bennett.

# Theorem:
The Fermat number $F_5$ is divisible by $641$, i.e., $641|F_5$.

# Proof:
As defined, $F_5 :=2^{2^5}+1=2^{32}+1 \ \ldots (1)$. Factorise $641$ as $641=640+1 =5 \times 128+1 =5 \times 2^7 +1$. Setting $a=5$ and $b=2^7$, we have $ab+1=641$. Subtracting $a^4=5^4=625$ from 641, we get $ab+1-a^4=641-625=16=2^4 \ \ldots (2)$. Now again, equation (1) can be written as
$F_5=2^{32}+1 = 2^4 \times {(2^7)}^4+1 = 2^4 b^4 +1 = (1+ab-a^4)b^4 +1 = (1+ab)\left[b^4+(1-ab)(1+a^2b^2)\right] = 641 \times \text{(an integer)},$
which gives $641 \mid F_5$.
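A quick way to check the conclusion of this proof numerically (my own sanity check, not part of the original post):

```python
F5 = 2**32 + 1                  # the fifth Fermat number, 4294967297
print(F5 % 641)                 # 0, so 641 divides F5
print(F5 // 641)                # 6700417, the cofactor
print(641 * 6700417 == F5)      # True
print(pow(2, 32, 641))          # 640, i.e. 2**32 ≡ -1 (mod 641)
```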
Mathematics has progressed a great deal, but it is still not known whether there are infinitely many Fermat primes or, for that matter, whether there is at least one Fermat prime beyond $F_4$. The best guess is that all Fermat numbers $F_n>F_4$ are composite (non-prime). A useful property of Fermat numbers is that they are relatively prime to each other; i.e., for Fermat numbers $F_n, F_m$ with $m > n \ge 0$, $\mathrm{gcd}(F_m, F_n) =1$. The following two theorems are very useful in determining the primality of Fermat numbers:

# Pepin Test:
For $n \ge 1$, the Fermat number $F_n$ is prime $\iff 3^{(F_n-1)/2} \equiv -1 \pmod {F_n}$.

# Euler–Lucas Theorem:
Any prime divisor $p$ of $F_n$, where $n \ge 2$, is of the form $p=k \cdot 2^{n+2}+1$.

Fermat numbers $F_n$ with $n=0, 1, 2, 3, 4$ are prime; those with $n=5,6,7,8,9,10,11$ have been completely factored; those with $n=12, 13, 15, 16, 18, 19, 25, 27, 30$ have two or more prime factors known; those with $n=17, 21, 23, 26, 28, 29, 31, 32$ have only one prime factor known; those with $n=14,20,22,24$ have no known factors but are proved composite. $F_{33}$ has not yet been proved either prime or composite.
###### Related articles
• Video: Documentary on Proof of Fermat's Last Theorem (wpgaurav.wordpress.com)

## How to Draw the Famous Batman Curve
Saturday, September 24th, 2011 16:41 / 3 Comments

The ellipse $\displaystyle \left( \frac{x}{7} \right)^{2} + \left( \frac{y}{3} \right)^{2} - 1 = 0$ looks like this:
So the curve $\left( \frac{x}{7} \right)^{2}\sqrt{\frac{\left| \left| x \right|-3 \right|}{\left| x \right|-3}} + \left( \frac{y}{3} \right)^{2}\sqrt{\frac{\left| y+3\frac{\sqrt{33}}{7} \right|}{y+3\frac{\sqrt{33}}{7}}} - 1 = 0$ is the above ellipse, in the region where $|x|>3$ and $y > -3\sqrt{33}/7$: That's the first factor. (more…)
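For readers who want to see the restricted ellipse piece just described, here is a small matplotlib sketch I added (not from the post). The square-root factors become NaN outside the allowed region, which is exactly what cuts the ellipse down to $|x|>3$ and $y > -3\sqrt{33}/7$.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 8, 801)
y = np.linspace(-5, 5, 801)
X, Y = np.meshgrid(x, y)

with np.errstate(divide="ignore", invalid="ignore"):
    # First factor of the Batman curve: the sqrt(|u|/u) terms are NaN where the
    # restriction fails, so the contour only shows the allowed arcs of the ellipse.
    F = ((X / 7) ** 2 * np.sqrt(np.abs(np.abs(X) - 3) / (np.abs(X) - 3))
         + (Y / 3) ** 2 * np.sqrt(np.abs(Y + 3 * np.sqrt(33) / 7) / (Y + 3 * np.sqrt(33) / 7))
         - 1)

plt.contour(X, Y, F, levels=[0])
plt.gca().set_aspect("equal")
plt.title("First factor: ellipse arcs with |x| > 3")
plt.show()
```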
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 76, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482486248016357, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/131629-lagrange-multiplier-help.html
Thread: 1. Lagrange Multiplier help

I've got two Lagrange Multiplier questions I need help on. They both say "Use Lagrange multipliers to find the maximum and minimum values of the function subject to the given constraint(s)." Here is the first: $f(x,y)=e^{xy}; x^3+y^3=16$. I've gotten to $f_x=(\lambda)g_x$, which is $ye^{xy}=(\lambda)3x^2$, and $f_y=(\lambda)g_y$, which is $xe^{xy}=(\lambda)3y^2$, and now I'm stuck... I don't know what to do. The second is $f(x,y,z)=8x-4z; x^2+10y^2+z^2=5$. I've gotten to
$f_x=(\lambda)g_x, --> 8=(\lambda)2x$
$f_y=(\lambda)g_y, --> 0=(\lambda)20y$
$f_z=(\lambda)g_z, --> -4=(\lambda)2z$
And now I'm stuck.

2. I'll give you the second one... And maybe that'll show you the method and you may be able to solve the first one; if not, I'll help you there as well. Alright, so you're given: $f(x,y,z) = 8x - 4z$, bound by the constraint function: $\phi(x,y,z) = x^2 + 10y^2 + z^2 = 5$. You've already shown that the components of the gradients are related by the constant multiple $\lambda$, which gave you:
$8 = 2\lambda x$
$0 = 20\lambda y$
$-4 = 2\lambda z$
The first thing is to try to solve each equation for a different variable if possible:
$\frac{4}{\lambda} = x$
$0 = y$ (We can conclude this because neither x nor z can be undefined, and therefore $\lambda \neq 0$)
$-\frac{2}{\lambda} = z$
You can see right off the bat that x and z are similar; more specifically:
$-\frac{2}{\lambda} = z = -\frac{1}{2}\frac{4}{\lambda} = -\frac{1}{2}x$
Now, you plug this all into the constraint function:
$x^2 + 10(0)^2 + \left(-\frac{1}{2}x\right)^2 = 5$
$x^2 + \frac{x^2}{4} = 5$
$\frac{5}{4}x^2 = 5$
$x^2 = 4$
$x = \pm 2$
Now you plug this into the formula for z: $z = -\frac{1}{2}(\pm 2) = \mp 1$. So, the critical points are: $(2,0,-1)$ and $(-2, 0 , 1)$. You can plug these in on your own to find out whether they are a max or a min.
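A quick symbolic check of the worked answer for the second problem (an illustration I added, not part of the thread), letting sympy solve the Lagrange conditions directly:

```python
import sympy as sp

x, y, z, lam = sp.symbols("x y z lam", real=True)
f = 8*x - 4*z
g = x**2 + 10*y**2 + z**2 - 5

# Lagrange conditions: grad f = lam * grad g, together with the constraint g = 0
eqs = [sp.Eq(sp.diff(f, v), lam * sp.diff(g, v)) for v in (x, y, z)] + [sp.Eq(g, 0)]
for s in sp.solve(eqs, [x, y, z, lam], dict=True):
    print(s, "  f =", f.subs(s))
# Expected: (2, 0, -1) with f = 20 (the maximum) and (-2, 0, 1) with f = -20 (the minimum)
```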
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374533891677856, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/65082-tangents.html
# Thread:

1. ## tangents
find an equation of the line tangent to f(x)=x^2-4x at P(3,-3) thank you

2. Originally Posted by mxhockey140
find an equation of the line tangent to f(x)=x^2-4x at P(3,-3) thank you
Hi mxhockey, The tangent line you are looking for has the form $y=mx+b$ where m is the gradient (slope) of your function f(x) at (3, -3). So just find f'(x) and plug in x = 3 to find m.

3. Well, we seek a line passing through a given point. "Tangent" means "toucher": the slope of the tangent line equals the local slope of the curve at the given point. In the end we can compute that local slope using the derivative.
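A short sympy check of the recipe in the replies (my addition, not from the thread): compute f'(3), then build the line through (3, -3) in point-slope form.

```python
import sympy as sp

x = sp.symbols("x")
f = x**2 - 4*x

m = sp.diff(f, x).subs(x, 3)            # slope at x = 3  ->  2
tangent = m * (x - 3) + f.subs(x, 3)    # line through (3, f(3)) = (3, -3)
print(m, sp.expand(tangent))            # 2, 2*x - 9
```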
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8796389102935791, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43268/computing-a-best-fit-of-discrete-points-from-a-multipole-expansion-i-e-inver
# Computing a "best-fit" of discrete points from a multipole expansion, i.e. invert the multipole moments

Take a field $\phi(\bf{x})$ created from a charge distribution contained within a radius $R$. The multipole expansion in spherical harmonics $Y_{\ell,m}$ outside of $R$ is approximated by: $$\phi({\bf x}) \approx \frac{1}{4\pi \epsilon_0} \sum_{\ell=0}^{\ell_{MAX}} \sum_{m=-\ell}^{\ell} \frac{4\pi}{2\ell +1} \alpha_{\ell, m} \frac{Y_{\ell,m}(\theta, \phi)}{r^{\ell +1}}$$ Given only the finite set of the multipole moments $\alpha_{\ell,m}$, is it possible to find a charge distribution of $N$ discrete charges $(q_i, r_i, \theta_i, \phi_i)$ that "best-fits" this potential? Right now I'm using a simple minimum finder over the $3N$ position variables (see note 1), but I'm wondering if there is any prior work/observations on inverting the multipole moments. Note 1: If the positions of the charges are fixed, the moments are simply linear combinations of the charge magnitudes $q_i$: $$\alpha_{\ell, m} = \sum_{i=1}^N q_i Y^*_{\ell,m}(\theta_i, \phi_i) r_i^\ell$$ thus simple linear algebra gives the best-fit charge magnitudes. Note 2: It's easy to see that this charge distribution need not be unique; indeed if $\ell_{MAX}=0$ any combination such that $\sum_{i=1}^N q_i \propto \alpha_{0,0}$ should work. This is OK, I'm more concerned with finding good approximations than the absolute best fit. Note 3: If the charge magnitudes are real (which is a given for a physical problem), the number of unique moments is reduced by roughly a factor of 2 since: $$\alpha_{\ell,-m} = (-1)^m \alpha_{\ell,m}^*$$
- It's a nice problem but the solution is indeed very non-unique - for example, you may scale the distances from the origin $r$ and the charges in various inverse ways, and rotate the four $+-+-$ charges defining a quadrupole around the axis, and do many other things. And because the solution isn't unique, it seems unlikely that there is a "canonical formula" of any sort. – Luboš Motl Nov 2 '12 at 20:37 @LubošMotl I agree, but the problem is motivated by a practical concern, making (pseudo) accurate "toy models" of electrostatics for protein solutions. As such any thoughts on the matter, canonical or not, would suffice as solutions in this case. I would expect that the symmetry of the problem should suggest something better than brute force optimization. – Hooked Nov 2 '12 at 21:55

## 1 Answer

This problem has a unique solution if you only allow charges of one sign; it is known as the moment problem and is one of the central problems of measure theory. The wikipedia article on it should provide a good starting point for reading on it. However, as Luboš points out, for a signed measure the moment problem is usually indeterminate. One way to phrase this is that there is a big set of charge distributions for which all moments vanish. (This includes, for instance, all bounded charge distributions with a conducting shell around them.) I don't know of results for finding solutions in the indeterminate case but phrasing the problem in these terms might help.
-
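To make Note 1 concrete, here is a rough numpy/scipy sketch I added (not from the question or answer) of the linear step: with the positions held fixed, stack the moment equations into a design matrix and solve for the charges by least squares. Variable names are my own, and it assumes scipy's `sph_harm(m, l, azimuth, polar)` argument convention and a flat list of the available $(\ell, m)$ pairs.

```python
import numpy as np
from scipy.special import sph_harm

def fit_charges(alpha, lm_pairs, r, theta, phi):
    """Least-squares charges q_i reproducing the given multipole moments.

    alpha         : array of moments, one per (l, m) in lm_pairs
    lm_pairs      : list of (l, m) tuples
    r, theta, phi : arrays of fixed charge positions (physics convention:
                    theta = polar angle, phi = azimuthal angle)
    """
    # Design matrix: A[row, i] = conj(Y_lm(theta_i, phi_i)) * r_i**l
    # scipy's sph_harm takes (m, l, azimuth, polar), hence the swapped angles.
    A = np.array([
        np.conj(sph_harm(m, l, phi, theta)) * r**l
        for (l, m) in lm_pairs
    ])
    q, *_ = np.linalg.lstsq(A, np.asarray(alpha), rcond=None)
    return q   # should be (approximately) real for a physical moment set
```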
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363085627555847, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-applied-math/204309-prove-sequence-converges-1-a.html
# Thread:

1. ## Prove a sequence converges to 1
I am new to the world of proofs and I'm working on an assignment for Analysis. While I understand the concept of convergence, I'm finding that writing the proofs is difficult for me, and I want to ask if my logic is sound here and what I can improve on.

Problem: For each positive integer $n$, let $p_{n} = 1 - \frac{1}{n}$. Show that the sequence $p_{1}, p_{2}, p_{3}, ...$ converges to 1.

Proof: To show that $p$ converges to 1, we must show that if $S$ is an open interval containing 1, then $\exists$ a positive integer $N$ such that if $n$ is a positive integer and $n \ge N$, then $p_{n} \in S$. Let $S$ be an open interval containing 1 and let $n \in \mathbb{Z}^+$ and $N \in \mathbb{Z}^+$. Let $S$ be the interval $(1-\epsilon, 1+\epsilon)$, $\epsilon > 0$. Then we need $|p_{n} - 1| < \epsilon$ in order for $p_{n} \in S$. Since $p_{n} = 1 - \frac{1}{n}$, $|p_{n} - 1| = |1 - \frac{1}{n} - 1| = |-\frac{1}{n}| = |-1 \cdot \frac{1}{n}| = |\frac{1}{n}|$. So we need $|\frac{1}{n}| < \epsilon$. Since $n$ is positive, we can say $\frac{1}{n} < \epsilon \Longrightarrow \frac{1}{\epsilon} < n$. Now let $N = \frac{1}{\epsilon}$. Since for all $n > N$, $p_{n} \in S$, $p$ converges to 1.

A few things I'm concerned about are at the end when I let $N = \frac{1}{\epsilon}$; if I say this, is it implied that $\frac{1}{\epsilon}$ is a positive integer, or can that not be assumed because we don't know that 1 evenly divides $\epsilon$? Also, I showed it for $n > N$ instead of $n \ge N$, but because of the way I formed the interval I'm not sure how to fix that. Any tips would be greatly appreciated!
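Not a substitute for the proof, but a quick numerical illustration I added of the ε–N bookkeeping being asked about: rounding 1/ε up to an integer still gives a valid choice of N, and every later term lands inside the interval.

```python
import math

def check(eps, how_many=1000):
    N = math.ceil(1 / eps)        # an integer choice of N with N >= 1/eps
    # verify |p_n - 1| = 1/n < eps for the next `how_many` terms after N
    return all(abs((1 - 1/n) - 1) < eps for n in range(N + 1, N + 1 + how_many))

print(check(0.1), check(0.001), check(1/3))   # True True True
```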
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319377541542053, "perplexity_flag": "head"}
http://cms.math.ca/10.4153/CMB-2011-135-9
Canadian Mathematical Society (www.cms.math.ca)
location: Publications → journals → CMB, Abstract view

# On the Smallest and Largest Zeros of Müntz-Legendre Polynomials
Read article [PDF: 143KB] http://dx.doi.org/10.4153/CMB-2011-135-9
Canad. Math. Bull. 56 (2013), 194-202. Published: 2011-06-29. Printed: Mar 2013
• Úlfar F. Stefánsson, School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332-0160, USA

## Abstract
Müntz-Legendre polynomials $L_n(\Lambda;x)$ associated with a sequence $\Lambda=\{\lambda_k\}$ are obtained by orthogonalizing the system $(x^{\lambda_0}, x^{\lambda_1}, x^{\lambda_2}, \dots)$ in $L_2[0,1]$ with respect to the Legendre weight. If the $\lambda_k$'s are distinct, it is well known that $L_n(\Lambda;x)$ has exactly $n$ zeros $l_{n,n}\lt l_{n-1,n}\lt \cdots \lt l_{2,n}\lt l_{1,n}$ on $(0,1)$. First we prove the following global bound for the smallest zero, $$\exp\biggl(-4\sum_{j=0}^n \frac{1}{2\lambda_j+1}\biggr) \lt l_{n,n}.$$ An important consequence is that if the associated Müntz space is non-dense in $L_2[0,1]$, then $$\inf_{n} l_{n,n}\geq \exp\biggl({-4\sum_{j=0}^{\infty} \frac{1}{2\lambda_j+1}}\biggr)\gt 0,$$ so the elements $L_n(\Lambda;x)$ have no zeros close to 0. Furthermore, we determine the asymptotic behavior of the largest zeros; for $k$ fixed, $$\lim_{n\rightarrow\infty} \vert \log l_{k,n}\vert \sum_{j=0}^n (2\lambda_j+1)= \Bigl(\frac{j_k}{2}\Bigr)^2,$$ where $j_k$ denotes the $k$-th zero of the Bessel function $J_0$.

Keywords: Müntz polynomials, Müntz-Legendre polynomials
MSC Classifications: 42C05 - Orthogonal functions and polynomials, general theory [See also 33C45, 33C50, 33D45]; 42C99 - None of the above, but in this section; 41A60 - Asymptotic approximations, asymptotic expansions (steepest descent, etc.) [See also 30E15]; 30B50 - Dirichlet series and other series expansions, exponential series [See also 11M41, 42-XX]

© Canadian Mathematical Society, 2013
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7693783044815063, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2068/complexity-of-arithmetic-in-a-finite-field/2070
# Complexity of arithmetic in a finite field?

I am wondering what the complexities are of adding/subtracting and multiplying/dividing numbers in a finite field $\mathbb{F}_q$. I need it to understand an article I am reading. Thank you
-
1 – mikeazo♦ Mar 10 '12 at 18:19

## 3 Answers

Well, if $q$ is a prime (and not $p^n$ with $n>1$), then addition, subtraction and multiplication can be performed by doing the traditional operations modulo $q$, that is:
$a +_{\mathbb{F}_q}b \equiv (a+b) \bmod q$
$a -_{\mathbb{F}_q}b \equiv (a-b) \bmod q$
$a \times_{\mathbb{F}_q}b \equiv (a\times b) \bmod q$
As such, addition and subtraction can be done in time $O(\log(q))$, while multiplication is typically done in time $O(\log^2(q))$. Now, there are multiplication/modulo methods that are asymptotically faster, but they are typically slower for the range of $q$'s that we usually see in practice in cryptography; however, if your $q$ is huge, it would be reasonable to use these faster methods. The tricky one is division; that is not integer division followed by a modulus operation. Instead, it involves finding the multiplicative inverse of a number; that is, given $b$, we find the field member $b^{-1}$ such that $b \times b^{-1} = 1$. Then, $a / b = a \times b^{-1}$. Multiplicative inverses are typically found using the Extended Euclidean Method; a straightforward implementation takes $O(\log^3(q))$ time. Again, there are optimizations that you can do; however, division is still the most costly of the four operations (to the extent that we typically arrange the calculations to minimize the number of divisions, even at a cost of increasing the number of multiplies).
-
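As a small illustration of the four operations in a prime field, here is a sketch I added using Python's built-in big integers; `pow(b, -1, q)` computes the modular inverse (it requires Python 3.8+ and uses an extended-Euclid-style algorithm internally), and the prime modulus is just an example.

```python
q = 2**255 - 19            # a large prime, used here purely as an example modulus
a, b = 1234567, 7654321

add = (a + b) % q
sub = (a - b) % q
mul = (a * b) % q
inv = pow(b, -1, q)        # multiplicative inverse of b in F_q
div = (a * inv) % q        # "division" is multiplication by the inverse

assert (div * b) % q == a % q
```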
In practice, multiplication will be done with Karatsuba's algorithm with complexity about $O(n^{1.585})$. Also, in a binary field, squaring (and extracting a square root) can be done very efficiently, with $O(n)$ complexity. This is the Frobenius endomorphism, and is used to speed up computations on a special kind of elliptic curves called Koblitz curves. Squarings and square roots can be even further optimized to become "free" using normal bases, but this makes multiplication widely less efficient, so this is not considered to be a net gain in the general case. Besides the Handbook of Applied Cryptography (especially chapters 2 and 14), a good reference is chapter 2 of the Guide to Elliptic Curve Cryptography, which covers finite field arithmetics -- and, lo! chapter 2 of that book is the "free sample" chapter that can be downloaded as a PDF from the Web site. - The Handbook on Applied Cryptography (link to the pdf version is on Alfred's webpage) has some of the known techniques to do finite field arithematic. If you are doing arithmetic to implement Elliptic Curve Cryptography (note the comment made by Paulo), then there are methods that depends on whether you are doing it in Jacobian or Projective plane (inverse works fine in Jacobian and addition works great on Projective planes). You can refer to this paper for more ground level details. There has been considerable improvements to all the algorithms stated in the paper, but they involve more complex methods. - Elliptice curve operations are not field operations, though for their implementation these operations are usually used. – Paŭlo Ebermann♦ Mar 10 '12 at 19:13 Oh yes, I should have mentioned that explicitly. I will do it now. Thanks for bringing this to notice. – Jalaj Mar 10 '12 at 19:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334713220596313, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38447/what-is-a-closed-orbit-that-is-not-separable-answered
## What is a closed orbit that is not separable? (ANSWERED)

What is an example of an action of a linearly reductive group variety on an affine variety with the property that there exists a closed orbit that is not separable? To be more precise, let's work over a fixed algebraically closed field $k$. Suppose that we are given an affine variety $X$ and a group variety $G$ acting on $X$. Given a closed point $x \in X$, we define the orbit $\operatorname{O}(x)$ to be the image of the map $G \to X$ given by $g \mapsto g x$. We say that the orbit is separable if the natural map $$G \to \operatorname{O}(x)$$ given by $g \mapsto gx$ is separable. This question is only interesting in characteristic $p>0$. In this case, the condition that $G$ is linearly reductive is very strong: it implies $G$ is the product of a multiplicative torus and a finite group of order prime-to-$p$.
- 2 This happens for every $G$ of positive dimension, since all one needs is a non-smooth subgroup scheme (such as kernel of Frobenius). Indeed, if $H$ is a closed $k$-subgroup scheme of $G$ then let $X = G/H$ equipped with the natural left $G$-action (so $X$ is smooth, since $G$ is smooth, regardless of how "bad" $H$ may be). Then the orbit map through $x = 1$ is the natural surjection $G \rightarrow X$ which is not separable precisely when $H$ is not smooth. The simplest example is $G = {\rm{GL}}_1$ acting on $X = G$ via $t.x = t^p x$, whose orbit map through $x = 1$ is $t \mapsto t^p$. – BCnrd Sep 12 2010 at 5:39 Thanks! The formulation in terms of the stabilizer really clarifies things! – jlk Sep 12 2010 at 6:57 Can I suggest that @BCnrd leave his answer as an answer, so that the question can be marked as "answered" rather than have it expressed so in the title? I know that BCnrd doesn't like to leave answers, so another option is for the OP to copy the answer into the answers (and mark the answer as CW if you want to make sure you don't pick up points for someone else's answer). I would do so, but I wanted to give BCnrd a chance. – Theo Johnson-Freyd Sep 13 2010 at 2:37 Believe me, Theo, you are far from the first to have suggested that. – Ben Webster♦ Sep 13 2010 at 3:56 @BCnrd, Ben Webster submitted your answer, and I have accepted it so the problem will be marked "Answered." Please let me know if you prefer that I not do this. – jlk Sep 13 2010 at 4:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523559212684631, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/approximation+sequences-and-series
Tagged Questions 3answers 41 views Series evaluated to $m$ terms, approximating the error Given a series $\displaystyle\sum_{n=0}^\infty a_n$, how can we bound the error (which I shall denote with $R_n$) when we evaluate it to $m$ terms? $$\sum_{n=0}^\infty a_n \approx \sum_{n=0}^m a_n$$ ... 0answers 53 views Expansion in powers Let $n=2k, k \in Z_+$. Let P_k\left(\frac{t}{\sqrt n}\right)=n!\sum_{\begin{smallmatrix} n_1+\ldots+n_k=n \\ ... 0answers 35 views alternating series estimation with integral? We know that there are some approximation like Abel's identity. If $\lambda_n$ is increasing and $$C(x)=\sum_{\lambda_n\le x}c_n,\qquad(c_n\in\mathbb{C})$$ Then if $X\ge\lambda_1$ and $\phi(x)$ has ... 0answers 45 views How can weakly/strongly decreasing or increasing approximate sums be explained? I'm reading around big O to get some concept about performance for data structures. The mathematics book recommended by the open book ~ maths for computer science In the book (pg 456) part of the ... 0answers 44 views Intuition for approximating Ei(x) I'm working with a function that is defined to be the sum of a series, and I'd like to either find its values, or approximate them analytically: \$F(x) = \frac{1}{w} ... 1answer 159 views Approximating a weird sum How can I approximate the sum$$\sum_{k=1}^n \left(\frac{2k}{n} \left\lceil \frac{n}{k} \right\rceil \left\{ \frac{n}{k} \right\}-1\right)$$ where $\{x\}$ is the fractional part function, and \$\lceil ... 1answer 42 views Prove that s is finite and find an n so large that $S_n$ approximate s to three decimal places. $$S=\sum_{k=1}^\infty\left(\frac{k}{k+1}\right)^{k^2};\hspace{10pt}S_n=\sum_{k=1}^n\left(\frac{k}{k+1}\right)^{k^2}$$ Let $S_n$ represent its partial sums and let $S$ represent its value. Prove that ... 1answer 143 views Is this sequence convergent? I heard this question from a professor a couple years ago. I still think about it... Does the sequence $(a_n)_{n\in \mathbb N}$ with $$a_n=\sqrt[n]{|\sin(n)|}$$ converges ( to $1$ ) ? I believe ... 1answer 70 views Approximating a simple sum Can somone help me find an assymptotic formula for n, for fixed x , for this sum , perhaps an inequality would be even better, or some bound on the error. $$\sum_{k=1}^n \frac{1}{\log(kx)}$$ I need ... 0answers 58 views Rapidly convergent series for $\sum_{J=0}^{\infty} (2 J + 1) e^{-\beta J(J+1)}$ (rigid rotor) I need to evaluate this series for arbitrary $\beta > 0$: $Q = \sum_{J=0}^{\infty} (2 J + 1) e^{-\beta J(J+1)}$ Is it related to a known transcendental function? From the research I did, it ... 1answer 34 views How to approximate a complex linear homogenous recurrence with constant coefficients with a simple one? Is there some standard way to approximate a complex linear homogenous recurrence with constant coefficients with a simple one? For example, I might want to approximate ... 2answers 254 views Summation of divergent series of Euler: $0!-1!+2!-3!+\cdots$ Consider the series $$\sum\limits_{k=0}^\infty (-1)^kk!x^k\in\mathbb{R}[[x]]$$ Let $s_n(x)$ be partial sum. And let $\omega_{k,n}=(k!^2(n-k)!)^{-1}$. My question is: Prove that ... 0answers 58 views proof of one inequality with sums Please help me to prove the following inequality: Fix $k, m \in Z_+$ and for $j \in Z_+$ set \begin{align*} a_j^{(1)}=a_j=\sum_{i=0}^{\min\{j,k\}}\frac{1}{i!6^i}\frac{(-1)^{j-i}}{(2(j-i)+1)!} ... 
1answer 156 views Using binomial theorem find general formula for the coefficients Using binomial thaorem (http://en.wikipedia.org/wiki/Binomial_theorem) find the general formula for the coefficients of the expantion: ... 1answer 233 views Approximating the logarithm of sum I would like to approximate $$\ln(\sum_{k=0}^n(n-2k)^p)$$ Here $p\geq 2$ 0answers 106 views Calculation of sum I am wondering if it is possible to calculate or approximate the following sum $$\sum_{k=0}^l\frac{(l-2k)^p(2l+k(k-1))l^{k-1}}{(k+3)(k+2)}$$here $p \geq 2$. Thank you. 0answers 240 views Calculation of a 'double' sum Let $n \in N$ and $q\geq 2$. I am trying to calculate the following sum: $$\sum_{i=0}^{\sqrt n/2}\sum_{j= i \sqrt n }^{(i+1)\sqrt n}\frac{(-1)^q2^q(\frac{n}{2}-j)^q}{(n-j)!j!}$$ Any help will be ... 1answer 126 views Best and most efficient way to numerically compute $e$? There are many well-known methods for efficiently numerically computing $\pi$, such as Chudnovsky's Method or perhaps Gauss-Legendre's algorithm. I was wondering what the best method for computing $e$ ... 1answer 178 views Newton-Raphson's Method to find $\sqrt{2012}$ I am asked to find $\sqrt{2012}$ using Newton-Raphson's Method with the following recursive method $$x_{n+1} = \frac{1}{2} (x_n + \frac{a}{x_n})$$ I notied that give same answers as using ... 2answers 308 views How to bound the truncation error for a Taylor polynomial approximation of $\tan(x)$ I am playing with Taylor series! I want to go beyond the basic text book examples ($\sin(x)$, $\cos(x)$, $\exp(x)$, $\ln(x)$, etc.) and try something different to improve my understanding. So I ... 4answers 198 views How could we manually approximate $\sum_{i=1}^{50} i! = 3.1035 \times 10^{64}$? How could we manually approximate $$\sum_{i=1}^{50} i!$$ to the value $3.1035 \times 10^{64}$? I faced this question in my aptitude test,there were four option given,I couldn't solve it during ... 1answer 356 views A series problem by Knuth I came across the following problem, known as Knuth's Series which originally was an American Mathematical Monthly problem. Prove that \sum_{n=1}^\infty ... 2answers 131 views Find fast exact value for numbers in the form $\sum_{k=min}^{Max}\frac{1}{k}$ I know I could start multiplying by all denominators and try to get the exact value that way but is there some smarter way or shortcut? Let's take simple example: \$\displaystyle ... 3answers 343 views $\lim_{n\to\infty} f(2^n)$ for some very slowly increasing function $f(n)$ I should be able to answer this myself, but feel insecure anyway. I want to know, whether a function f(n) is bounded if n goes to infinity (and if it's bounded, the limit). Heuristically it appears ... 3answers 2k views Motivation for Ramanujan's mysterious $\pi$ formula The following formula for $\pi$ was discovered by Ramanujan: $$\frac1{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^\infty \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}\!$$ Does anyone know how it works, or ... 3answers 178 views Determine speed of the object at the current time by the non-uniform time sample Here is a time sample: $Q = \{(t_i, x_i) | 0 \leq x_i \leq x_{i+1}, 1 \leq i \leq n\}$ and rules: (1) $T_1 \leq t_{i+1} - t_i < T_2$ where $T_1, T_2 > 0$ (2) $x_i$ comes with error: \$x_i = ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9124069213867188, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/72886-question-functions-domains.html
# Thread:

1. ## Question on functions & domains
I'm having a hard time wrapping my mind around domains. I have no problem finding the domain of a function, but what is stumping me is determining a function with a given domain. An example would be: find a function with the given domain [-5,-2)U(2,5]. Thanks for your help

2. This can be tricky. For this problem, we have two restrictions to make. Beyond -5 and 5 we need the function to be undefined, but we want to include those endpoints, and at 2 we need F to be undefined. The way you can restrict a function from extending beyond a point is by using square roots: cross over into a negative value under the square root and the domain ends. To remove a point, use a rational function: 1/(x-2) makes x=2 undefined. To make this easier at the endpoints, though, let's use 3/(x-2). At x=-5, 5 we want F=0 so that those points are included in the domain. The sign will be different at the different endpoints, but if we square the expression above, both points will be positive: $\left( \frac{3}{x-2} \right) ^2$. We need to subtract 1 from this to make f=0 at x=-5, 5, and this in fact works: $\sqrt{\left( \frac{3}{x-2} \right) ^2 -1}$. If you check values beyond the endpoints of the required domain, I think it checks out. This isn't easy to follow, but I kind of just wrote down my inner monologue. I hope it helps. EDIT: Oh my God. Grrrrrrrrrrrrrrrrrrr. I missed a crucial negative. [-5,-2)U(2,5] Disregard this but maybe it can help some. Sorry.
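Following the same recipe but keeping the symmetry of the target interval about 0 in mind, one candidate is $f(x)=\sqrt{\tfrac{21}{x^2-4}-1}$: the $x^2-4$ denominator removes $\pm 2$, and subtracting 1 cuts the domain off at $\pm 5$ while keeping those endpoints. This construction and the quick numeric check below are my own addition, not part of the thread.

```python
import math

def f(x):
    # candidate whose natural domain should be [-5,-2) U (2,5]
    return math.sqrt(21 / (x**2 - 4) - 1)

for x in (-5, -4.9, -2.01, -2, 0, 2, 2.01, 5, 5.1):
    try:
        print(x, "->", round(f(x), 4))
    except (ValueError, ZeroDivisionError):
        print(x, "-> undefined")
```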
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9132272601127625, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/61990/how-to-show-that-gcda-b-axby-implies-gcdx-y-1/62005
# How to show that $\gcd(a,b) = ax+by \implies \gcd(x,y)=1$?

Assume that $$\gcd(a,b)=ax+by$$ for some $a, b, x, y \in \mathbb Z$. How do I show that $\gcd(x,y)=1$? (Hint: contradiction.)
- 1 What do you mean with "syt"? – ulead86 Sep 5 '11 at 10:46 12 You have asked many questions and accepted few answers. Yet you ask another question. This does not compute. – Gerry Myerson Sep 5 '11 at 10:52 1 Short answer: you can't. Or at least, just because the gcd can be expressed in that form doesn't mean that it is 1. – Josh Chen Sep 5 '11 at 10:55 6 @Josh: I think you misread the question. You are asked to show that $\gcd(x,y)=1$, not $\gcd(a,b)=1$. (Perhaps you are seeing the same formatting error that I see -- the LaTeX expression $\gcd(x,y)=1$ is above the first line instead of below it.) – TonyK Sep 5 '11 at 11:37 1 @alvoutila: I am not sure by the way the question is put whether you know the answer to the question and are giving me a hint, or whether the question comes with a hint, and you have not been able to solve it. In case it helps, the hint suggests assuming that there is a number c > 1 which divides both $x$ and $y$. – Mark Bennet Sep 5 '11 at 13:20 show 2 more comments

## 3 Answers

Suppose $\gcd(a,b) = ax + by = d$. Then $\exists \ u, v \in \mathbb{Z}$ such that $a = u \cdot d$ and $b = v \cdot d$. So then $d = ax + by = d (u x + v y)$, or in other words $u x + v y = 1$ with $u,v \in \mathbb{Z}$. So $\gcd(x,y) = 1$.
-
HINT $\rm\ \ \ (a,b)\:|\:a,b,\ \ c\:|\:x,y\ \Rightarrow\ (a,b)\:c\ |\ a\ x + b\ y\: =\: (a,b)\ \Rightarrow\ c\:|\: 1\:.\:$ Put $\rm\ c = (x,y)\:.$
-
2 When I saw this question in the feed, I thought: "Shouldn't Bill be the one giving hints ?!?!" :) – The Chaz 2.0 Sep 5 '11 at 14:14 2 @The Only when SE has neural RSS feed I can be FGITW while asleep! – Gone Sep 5 '11 at 14:24 I'm sure Skynet is working on it! – The Chaz 2.0 Sep 5 '11 at 14:57 @Bill: What exactly is the hint-part of your answer? You left it to the reader to prove that if $(x,y)|1$ then $(x,y) = 1$? :) – TMM Sep 5 '11 at 15:04 @Thi I've slightly altered the presentation. If this doesn't answer your query please let me know. – Gone Sep 5 '11 at 15:15

A proof by contradiction: Suppose $\gcd(x,y) = d > 1$. Then $\exists \ u,v \in \mathbb{Z}$ such that $x = u \cdot d$ and $y = v \cdot d$. So then $\gcd(a,b) = ax + by = d(au + bv)$, so $au + bv = \gcd(a,b)/d < \gcd(a,b)$ with $u, v \in \mathbb{Z}$. But this contradicts the fact that $\gcd(a,b)$ is the greatest common divisor of $a$ and $b$. Therefore our assumption $d > 1$ was wrong, and $\gcd(x,y) = 1$.
- 2 Setting up a proof by contradiction seems unnecessarily complicated here, so I'm not sure why they would give that hint. – TMM Sep 5 '11 at 13:42 I don't get the point. Why do you get that gcd(a,b) isn't the greatest common divisor? How does gcd(a,b)/d<gcd(a,b) indicate that gcd(a,b) isn't the greatest common divisor? – alvoutila Sep 5 '11 at 15:41 1 – TMM Sep 5 '11 at 16:03 In other words, you just use the fact that gcd(a,b) is as defined (the definition) to conclude a contradiction with the assumption (antithesis)? – alvoutila Sep 5 '11 at 16:33 1 Yes. The only assumption made was that $\gcd(x,y) > 1$, which led to the contradiction that $0 < au + bv < \gcd(a,b)$ with $a,b,u,v \in \mathbb{Z}$, which conflicts with the definition of $\gcd(a,b)$. Therefore the assumption is wrong and $\gcd(x,y) = 1$. – TMM Sep 5 '11 at 18:56
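A small computational illustration of the statement (my addition, not from the thread): run the extended Euclidean algorithm to get Bézout coefficients x, y for a, b, and observe that gcd(x, y) = 1 in every case tried. For simplicity the sketch assumes non-negative inputs.

```python
from math import gcd

def egcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b); assumes a, b >= 0."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

for a, b in [(12, 18), (240, 46), (17, 31), (1000, 35)]:
    g, x, y = egcd(a, b)
    assert a*x + b*y == g == gcd(a, b)
    print(f"{a}*{x} + {b}*{y} = {g},  gcd({x}, {y}) = {gcd(x, y)}")
```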
http://unapologetic.wordpress.com/2012/09/15/irreducible-modules/?like=1&source=post_flair&_wpnonce=5092e672ce
# The Unapologetic Mathematician ## Irreducible Modules Sorry for the delay; it’s getting crowded around here again. Anyway, an irreducible module for a Lie algebra $L$ is a pretty straightforward concept: it’s a module $M$ such that its only submodules are $0$ and $M$. As usual, Schur’s lemma tells us that any morphism between two irreducible modules is either $0$ or an isomorphism. And, as we’ve seen in other examples involving linear transformations, all automorphisms of an irreducible module are scalars times the identity transformation. This, of course, doesn’t depend on any choice of basis. A one-dimensional module will always be irreducible, if it exists. And a unique — up to isomorphism, of course — one-dimensional module will always exist for simple Lie algebras. Indeed, if $L$ is simple then we know that $[L,L]=L$. Any one-dimensional representation $\phi:L\to\mathfrak{gl}(1,\mathbb{F})$ must have its image in $[\mathfrak{gl}(1,\mathbb{F}),\mathfrak{gl}(1,\mathbb{F})]=\mathfrak{sl}(1,\mathbb{F})$. But the only traceless $1\times1$ matrix is the zero matrix. Setting $\phi(x)=0$ for all $x\in L$ does indeed give a valid representation of $L$.
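As a concrete illustration of the last paragraph (this sketch is mine, not part of the post), one can check numerically that the commutators of the standard basis of $\mathfrak{sl}(2)$ already span all of $\mathfrak{sl}(2)$, i.e. $[L,L]=L$ for this simple Lie algebra; that is exactly what forces a one-dimensional representation to be zero.

```python
import numpy as np

e = np.array([[0, 1], [0, 0]], dtype=float)
f = np.array([[0, 0], [1, 0]], dtype=float)
h = np.array([[1, 0], [0, -1]], dtype=float)

def bracket(x, y):
    return x @ y - y @ x

commutators = [bracket(x, y) for x in (e, f, h) for y in (e, f, h)]
# Flatten each 2x2 matrix to a vector and compute the dimension of the span.
M = np.array([c.flatten() for c in commutators])
print(np.linalg.matrix_rank(M))   # 3 = dim sl(2), so [sl(2), sl(2)] = sl(2)
```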
http://math.stackexchange.com/questions/179529/what-are-some-examples-of-vector-spaces-that-arent-graded
What are some examples of vector spaces that aren't graded? From wikipedia: a vector space $V$ is graded if it decomposes into direct sum $\oplus_{n \geq 0} V_n$ of vector spaces $V_n$. So as far as I understand things, any vector space with a countable basis is graded: Let $V$ be a vector space over a field $k$ with basis $\{v_n\}_{n\in\mathbb{N}}$, then $V = \oplus_{n\geq 0} k\cdot v_n$. Then the only vector spaces that I can think of that aren't obviously graded are things like $C(X)$, the space of continuous functions on some manifold $X$ Is this correct? are there any more? or do I not understand something? Thanks - 1 Answer Grading isn't a property of a vector space: it's extra structure attached to a vector space, in the same way that a multiplication is an extra structure you attach to a set to make it a group. So this is a little like asking "what are some examples of sets that aren't groups?" (As it turns out, every set can be equipped with a group structure, and this is equivalent to the axiom of choice. But this is missing the point.) Every vector space admits a trivial grading in which $V_0 = V$ and $V_n = 0$ for all $n \ge 1$. But we often encounter vector spaces (such as the space of polynomials) with natural gradings, they are usually nontrivial, and taking advantage of this extra structure is useful in various ways. - well that certainly clears things up. thanks! – mebassett Aug 6 '12 at 16:07
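To make the "extra structure" point concrete, here is a small sketch of my own (not from the thread) using the standard grading of the polynomial space $k[x]$ by degree: an element is split into its homogeneous components, and the direct sum recovers it. The grading is bookkeeping we choose to carry along, not something the underlying vector space forces on us.

```python
import sympy as sp

x = sp.symbols('x')
p = 3 + 5*x - 2*x**3

# V_n is spanned by x**n; the homogeneous component of p in V_n is coeff(x, n) * x**n.
components = {n: p.coeff(x, n) * x**n for n in range(sp.degree(p, x) + 1)}
print(components)                                        # {0: 3, 1: 5*x, 2: 0, 3: -2*x**3}
print(sp.simplify(sum(components.values()) - p) == 0)    # True: the decomposition recovers p
```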
http://mathhelpforum.com/differential-geometry/192043-inequality-two-metrics.html
# Thread: 1. ## inequality with two metrics I want to prove that $\sup_{x \in [0,1]} |f_n(x)-f(x)| \geq ({\displaystyle\int^1_0 |f_n(x)-f(x)|^2 \ dx})^{1/2}$ for $f_n, f \in C[0,1]$. $({\displaystyle\int^1_0 |f_n(x)-f(x)|^2 \ dx})^{1/2}$ $\leq$ (by the triangle inequality for the $L^2$ norm, which follows from the Cauchy–Schwarz inequality) $({\displaystyle\int^1_0 |f_n(x)|^2 \ dx})^{1/2} + ({\displaystyle\int^1_0 |f(x)|^2 \ dx})^{1/2}$ $\leq$ $\sup_x |f_n(x)| + \sup_x|f(x)|$ However I am unsure where to go now. 2. ## Re: inequality with two metrics Originally Posted by FGT12 I want to prove that $\sup_{x \in [0,1]} |f_n(x)-f(x)| \geq ({\displaystyle\int^1_0 |f_n(x)-f(x)|^2 \ dx})^{1/2}$ for $f_n, f \in C[0,1]$. $({\displaystyle\int^1_0 |f_n(x)-f(x)|^2 \ dx})^{1/2}$ $\leq$ (by the triangle inequality for the $L^2$ norm, which follows from the Cauchy–Schwarz inequality) $({\displaystyle\int^1_0 |f_n(x)|^2 \ dx})^{1/2} + ({\displaystyle\int^1_0 |f(x)|^2 \ dx})^{1/2}$ $\leq$ $\sup_x |f_n(x)| + \sup_x|f(x)|$ However I am unsure where to go now. If $M = \sup_{x\in[0,1]}|f_n(x)-f(x)|$ then $|f_n(x)-f(x)|\leqslant M$ for all $x$ in $[0,1]$. It follows that $\int_0^1|f_n(x)-f(x)|^2dx\leqslant\int_0^1M^2dx = M^2.$ Take the square root of both sides to get the inequality that you want (no need for Cauchy–Schwarz).
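A quick numerical sanity check of the inequality (my addition, not part of the thread): for a continuous function $g$ on $[0,1]$, which plays the role of $f_n-f$ here, the sup norm dominates the $L^2$ norm.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
for g in (np.sin(7 * np.pi * x), x**2 - 0.3, np.exp(x) - 2.0):
    sup_norm = np.max(np.abs(g))
    l2_norm = np.sqrt(np.mean(g**2))   # Riemann approximation of (int_0^1 g^2 dx)^(1/2)
    print(f"sup = {sup_norm:.4f}   L2 = {l2_norm:.4f}   sup >= L2: {sup_norm >= l2_norm}")
```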
http://math.stackexchange.com/questions/26258/the-laplace-transform-of-the-first-hitting-time-of-brownian-motion/26286
# The Laplace transform of the first hitting time of Brownian motion Let $B_t$ be the standard Brownian motion process, $a > 0$, and let $H_a = \inf \{ t : B_t > a \}$ be a stopping time. I want to show that the Laplace transform of $H_a$ is $$\mathbb{E}[\exp(-\lambda H_a)] = \exp (-\sqrt{2\lambda}\, a)$$ by considering the martingale $$M_t = \exp \left(\theta B_t -\frac{1}{2}\theta^2 t\right)$$ There's an obvious argument to follow here: assuming the optional stopping theorem applies, we have $$1 = \mathbb{E}[M_{H_a}] = \mathbb{E} \left[ \exp \left(\theta a - \frac{1}{2}\theta^2 H_a\right) \right] = \exp(\sqrt{2\lambda} a) \mathbb{E} \left[ \exp(-\lambda H_a) \right]$$ where $\theta = \sqrt{2\lambda}$. This is exactly what we wished to show. However, as far as I can tell, the hypotheses of the optional stopping theorem are not satisfied here. Here is the statement I have: If $(X_n)$ is a martingale and $T$ is an a.s. bounded stopping time, then $\mathbb{E}[X_T] = \mathbb{E}[X_0]$. I think not all is lost yet. $M_t > 0$ for all $t$, so the martingale convergence theorem applies, and $M_t \to M_\infty$ a.s. for some integrable random variable $M_\infty$. For each $t$, $H_a \wedge t = \min \{ H_a, t \}$ is a bounded stopping time, so certainly $\mathbb{E}[M_{H_a \wedge t}] = \mathbb{E}[M_0]$. But, $$\mathbb{E}[M_{H_a \wedge t}] = \mathbb{E}[M_{H_a} \mathbf{1}_{\{H_a \le t\}}] + \mathbb{E}[M_t \mathbf{1}_{\{H_a > t\}}]$$ and clearly what one wants to do is to take $t \to \infty$ on both sides. But here's where I get stuck: I'm sure I need a convergence theorem here in order to conclude that the equation remains valid in the limit. Now, $0 < M_{H_a} = \exp \left(\theta a - \frac{1}{2} \theta^2 H_a \right) \le \exp(\theta a)$, so the dominated convergence theorem applies, and so $$\lim_{t \to \infty} \mathbb{E}[M_{H_a} \mathbf{1}_{\{H_a \le t\}}] = \mathbb{E}[M_{H_a} \mathbf{1}_{\{H_a < \infty\}}]$$ and I believe Fatou's lemma gives me that $$\liminf_{t \to \infty} \mathbb{E}[M_t \mathbf{1}_{\{H_a > t\}}] \ge \mathbb{E}[M_{\infty} \mathbf{1}_{\{H_a = \infty\}}]$$ but I think what I need is the equality $$\lim_{t \to \infty} \mathbb{E}[M_t \mathbf{1}_{\{H_a > t\}}] = \mathbb{E}[M_\infty \mathbf{1}_{\{H_a = \infty\}}]$$ and as far as I can tell neither the monotone convergence theorem nor the dominated convergence theorem applies here. Is there anything I can do to rescue this line of thought? - 2 Use $0\le M_t\mathbf{1}_{\{H_a>t\}}\le\exp(\theta a-\frac12\theta^2t)$. – Did Mar 10 '11 at 21:05 @Didier: Thanks! Not sure why I keep missing the obvious. – Zhen Lin Mar 10 '11 at 21:24 3 +1 Way to show your work! – Byron Schmuland Mar 10 '11 at 23:34 This is to second Byron's comment. In fact, I wanted to mention this in my first comment but I forgot. – Did Mar 11 '11 at 6:12 2 @Didier: If you post your comment as an answer, we can vote it up and put this question to bed. – Nate Eldredge Jun 9 '11 at 19:24 ## 3 Answers Use the fact that $0\leqslant M_t\mathbf{1}_{\{H_a>t\}}\leqslant\exp\left(\theta a-\frac12\theta^2t\right)$. - I'm not sure that I understand the question properly. We have $M_{H_a\wedge t}\to M_{H_a}$ almost surely, and $M_{H_a\wedge t}<e^{\theta a}$. The right-hand side is constant, hence integrable, so doesn't the dominated convergence theorem readily apply for $t \to +\infty$? Am I missing something here? Regards - No, I'm afraid uniform integrability is something I never really understood well and have completely forgotten.
– Zhen Lin Mar 11 '11 at 7:27 It is not true that a positive martingale converges in $L^1$. It converges a.s. and is bounded in $L^1$, but may not be uniformly integrable. The critical binary branching process is an example. – Byron Schmuland Mar 11 '11 at 16:06 In fact, the OP's positive martingale does converge to zero almost surely and therefore it cannot be UI. – Byron Schmuland Mar 11 '11 at 16:10 @Byron Schmuland : You are right I edited my post consequently, sorry about that Zhen Lin. – TheBridge Mar 11 '11 at 16:53 I don't think you need to worry about the case $H_a = \infty$ - for the simple reason that this is an event of probability zero. -
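For what it's worth, the identity being proved can also be checked by simulation. The sketch below is my own addition (not from the thread); it uses a crude Euler discretisation of Brownian motion, so a small bias from the time step and the finite horizon is expected. Paths that have not hit $a$ by the horizon are counted as contributing $0$, which is harmless here because $\exp(-\lambda H_a)$ is tiny for such paths.

```python
import numpy as np

rng = np.random.default_rng(0)
a, lam, dt = 1.0, 0.5, 1e-3
n_paths, n_steps = 50_000, 20_000          # time horizon T = n_steps * dt = 20

B = np.zeros(n_paths)
hit_time = np.full(n_paths, np.inf)        # exp(-lam * inf) = 0 for paths that never hit
alive = np.ones(n_paths, dtype=bool)

for k in range(1, n_steps + 1):
    B[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    just_hit = alive & (B >= a)
    hit_time[just_hit] = k * dt
    alive &= ~just_hit

print("Monte Carlo          :", np.exp(-lam * hit_time).mean())
print("exp(-a*sqrt(2*lam))  :", np.exp(-a * np.sqrt(2 * lam)))
```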
http://physics.stackexchange.com/questions/5483/is-machs-principle-wrong
Is Mach's Principle Wrong? This question was prompted by another question about a paper by Woodward(not mine). IMO Mach's principle is very problematic (?wrong) thinking. Mach was obviously influenced by Leibniz. Empty space solutions in GR would result in a Minkowski metric and would suggest no inertia. Mach's principle seems incompatible with GR. Gravitational waves could also be a problem. I had thought that papers like one by Wolfgang Rindler had more or less marginalised the Mach Principle, but I see lots of internet discussion of it. Is it correct? Wrong? Is there evidence? (frame dragging experiments)? Let's use this definition from ScienceWorld.Wolfram.com "In his book The Science of Mechanics (1893), Ernst Mach put forth the idea that it did not make sense to speak of the acceleration of a mass relative to absolute space. Rather, one would do better to speak of acceleration relative to the distant stars. What this implies is that the inertia of a body here is influenced by matter far distant." - There are so many different versions of Mach's principle. Which one do you have in mind? – QGR Feb 19 '11 at 16:48 1 @QGR --yes there are variants. None in particular..the principle itself (or variants) seem to have adherents and enemies of relatively mainstream physicists. The question is argumentative, and that seems to be the most common word used to close questions. Sometimes "argumentative" is a catalyst to a good discussion, but I think in this case degree of belief may coincide with Leibniz' influence on the scientist (Smolin, Barbour). I was wondering more if theoretical or technical, or experimental considerations could (partially) invalidate it rather than philosophical arguments. – Gordon Feb 19 '11 at 17:22 4 @Gordon: The Stack* sites are not designed for "discussion" and they are not good at "discussion". They are designed for and good at Questions that have Answers. If you are trying to start a discussion you're doing it wrong. And a vague, imprecise question like "Is [something ill-defined] wrong?" is at serious risk of not having Answers which means it does not belong. – dmckee♦ Feb 19 '11 at 17:52 2 I think of Mach's principle (and the anthropic one) as one of those hotel showers that changes abruptly from being too hot or too cold, but is never quite comfortable. Similarly, these are either trivially right, or trivially wrong, depending on how you define things, but either way not all that useful or interesting. It always comes down to discussions of semantics, or who was right or wrong etc. – user566 Feb 19 '11 at 23:11 1 Existence of gravitational waves does not require GR. Gravitational waves should be expected in a purely classical field theory. If there is a charge and a field which is produced by the charge and the charge is accelerating, the field experiences wave. This is a purely classical concept. So it is incorrect to claim that gravitational waves contradict Mach: the waves are produced by matter and are matter themselves. The wave is just dynamic field. – Anixx Feb 21 '11 at 5:28 show 8 more comments 8 Answers Mach's principle has influenced Einstein but the final formulation of general relativity as of 1916 clearly invalidates Mach's conjecture. According to Mach's principle, motion - including accelerating and rotating one - may only be defined relatively to other objects. That would imply that there can't exist any gravitational waves. 
However, general relativity predicts and experiments confirm that gravitational waves do exist: the relevant observations were awarded by the 1993 physics Nobel prize, too. The waves are vibrations of the space itself. It means that the metric tensor remembers the information about the geometry - and curvature at each point, even in the empty space, something that Mach's principle specifically wanted to prohibit. Moreover, the perceptions and other effects of acceleration were supposed to be determined by comparisons with distant objects. This simple fact itself violates locality that has become important already in special relativity, and was simply inherited by general relativity. If you care about history, the new cold relationship is mutual: much like general relativity rejected Mach's principle, Mach rejected general relativity - and already special relativity, in fact. ;-) If you care about sociology, there's been a poll among physicists active in relativity, and a vast majority of them would also say that Mach's principle is invalidated by general relativity. Some people sometimes say that some effects predicted by general relativity, such as frame-dragging, are "Machian" in character. I think it is very misleading because it tries to make the listeners think that Mach's principle may be made compatible with the observations. It's very questionable what Mach's principle would predict about frame-dragging because Mach's principle has never become any viable candidate for a physical theory. But the idea that frame-dragging is Machian is more ideology and hype than a valid observation. Despite the vagueness about such very detailed effects, Mach's principle has said enough for us to be sure that it's incorrect in all of its forms. Well, there's a lot of discussion on the Internet about long-dead ideas in physics - and maybe mostly about them. However, the Internet has nothing to do with the current state of physics. - 1 @Lubos - well the arxiv is likely a primary source of info dissemination and it is on the internet and I would argue that it is influential and that the internet will become more important, not less. That doesn't minimize all the garbage on it as well. – Gordon Feb 19 '11 at 19:34 It doe depend on the formulation of "Mach's principle"--you can look at the fact that the flat space inside a spinning shell of mass rotates with respect to the flat space outside of the spinning shell of mass as a vindication of a certain version of "Mach's principle". – Jerry Schirmer Feb 19 '11 at 21:33 @Lubos, you say "This simple fact itself violates locality that has become important already in special relativity". Which seems to point out that the stars rotating cannot instantaneously influence an observer and so the effect must be different compared to the effect of the fixed stars on a rotating observer. So why did Einstein even bother to look at Mach's Principle? – John McVirgo Feb 20 '11 at 0:31 Dear @Gordon, arXiv is representative of the whole literature. Still, you should make at least some post-publication quality tests. If a paper claims a discovery of a new fundamental thing and it has less than 50 citations after many years, it's probably wrong, and you need to rely on other people's knowledge, well, then this paper is probably wrong. 
– Luboš Motl Feb 20 '11 at 8:46 2 @JerrySchirmer: That in fact invalidates Mach's principle, because in that model, while two forces equivalent to centrifugal and Coriolis force arise, an axial one with no fictitious force counterpart manifests as well. A rotating shell and a stationary shell are still absolutely distinguishable. – C.R. Feb 6 '12 at 9:59 show 4 more comments Using Mach's 1893 definition of Mach's principle condemns the discussion to irrelevance. It's like posting on physics.SE with a question titled "How is the emission spectrum of hydrogen determined?," but then saying in the body of the question that we want an answer written in terms of the aether and Newtonian mechanics. In the 1960's and 70's, there was a golden age of tests of GR, and one of the most active topics was testing GR against alternative theories such as Brans-Dicke gravity. B-D gravity is physically a very well motivated theory. The original paper is available online http://loyno.edu/~brans/ST-history/ and is very readable even if you're not a specialist. The idea of B-D gravity is to couple matter to a scalar field $\phi$, which provides a physical mechanism for Mach's idea that an object's inertia comes from the other matter in the universe. B-D gravity is more Machian than GR. Neither GR not B-D is completely Machian or completely non-Machian. B-D gravity has a dimensionless parameter $\omega$. In the limit $\omega\rightarrow\infty$, B-D gravity reduces to GR. Brans and Dicke committed themselves to the idea that "[...]in any sensible theory $\omega$ must be of the general order of magnitude of unity." This makes the theory falsifiable. Experiments show that $\omega$ must in fact be quite high. The best current limit comes from the Cassini probe, which requires $\omega \ge 40,000$. Therefore, B-D gravity should be considered as falsified. So the modern, sensible answer to the OP's question is: Mach's principle is false, in the sense that experiments determine the universe to be no more Machian than GR -- which is not very Machian. - Mach's principle, if interpreted charitably, requires that one include horizons as matter, along with gravitational waves, and light, and all particles. This is required to include black holes, and for consistency requires cosmological horizons too. Once you understand that "matter" means "horizon", the statement that all rotation is relative to distant horizons is just a stunted classical version of the holographic principle, and is sort of vacuously true. - In psychology there is a special effect called the "verbal overshadowing effect". It concerns the phenomenon that describing a previously seen face impairs recognition of this face. Mach's principle is essentially target of this psychological effect. It is theoretically overshadowed in such a way that most physicists do not relate to the underlying empirical core but to the theoretical context under discussion. Hermann Bondi and Joseph Samuel have tried to fix this observational core of Machs Principle. (The Lense–Thirring Effect and Mach’s Principle, Physics Letters A 228, 1997, S. 121–126) They called it "Mach0": "The universe, as represented by the average motion of distant galaxies, does not appear to rotate relative to local inertial frames." This "coincidence" is measured to a very high accuracy: 0.25 milliarcsec/year. (J. Kovalevsky, et. al. "The Hipparcos catalogue as a realisation of the extragalactic reference frame", Astron. Astrophysics. 
323, 620 - 633 (1997).) It is just this coincidence that has to be explained theoretically. General Relativity is only a preferred theoretical tool for explaining this coincidence, but all attempts have failed. Hence, from a historical point of view this empirical coincidence appears as an ANOMALY - as a fact that cannot be explained in any way by the running paradigm. The answers given on this website reflect this epistemological feature in an almost idealized way. Moshe, for example, has compared it with a hotel shower that changes abruptly from being too hot or too cold, but is never quite comfortable. As this empirical coincidence obviously refers to the ultimate boundaries of our universe, its solution or explanation can possibly not be found within the world, but only outside of it. HELMUT - +1 because there is a precise reference to a published paper. – joseph f. johnson Feb 8 '12 at 16:52 Mach's principle is simply a philosophical ancestor of the equivalence principle: matter tells geometry how to curve, geometry tells matter how to move. So, yes, you can have Minkowski as a solution for the vacuum Einstein equations, but the minute you introduce even the tiniest mass your solution is no longer Minkowski. You might point out that asymptotically the spacetime will still be flat and Minkowski. However, in GR (and in QFT) it is not only the local geometry that matters. The point is that there is no interesting geometry which does not also contain some matter. Mach's principle has been interpreted in many different ways. I find the following Wikipedia definition to be one I'm most comfortable with: A very general statement of Mach's principle is "Local physical laws are determined by the large-scale structure of the universe." Depending on what day of the week it is and whatever interpretation is your favorite for that day, Mach's principle could be "right", "wrong" or "obsolete". But if stated in the simple manner above, then it is nothing more than a restatement of the equivalence principle and the question of its "correctness" is no longer an issue. - Absolutely. The right question is not whether it is right or wrong, but whether it is useful or not. Clearly this used to be useful, as a way to converge on something more precise and correct. Is it useful now? I don't see how. – user566 Feb 20 '11 at 3:34 Thank you @Moshe. As for the usefulness of Mach's Principle I think that there is an argument to be made in its favor, but that's for another time. – user346 Feb 20 '11 at 8:34 2 Dear @Moshe, I would respectfully disagree. In fundamental physics, the ultimate question is always whether XY is right or wrong, not whether it is useful - which is left to managers and perhaps engineers. The observation that Mach's principle is no longer "useful" doesn't mean that we can't answer the question whether it's right or wrong. Yes, we can. – Luboš Motl Feb 20 '11 at 8:52 @Deepak, the variation of Mach's principle you wrote, "Local physical laws are determined by the large-scale structure of the universe," is in no way equivalent to the equivalence principle. Quite on the contrary, the equivalence principle says that the local physical laws in a freely falling frame are completely unaffected by any distant objects - GR is a method to impose locality in a manifest way. Your (or your quote from Wikipedia) version of Mach's principle has undoubtedly been shown invalid, too.
In GR, the large structure is determined by local physics and local sources, not vice versa – Luboš Motl Feb 20 '11 at 8:54 @Lubos the proposition->*proof*->*proposition* style of physics that you advocate is not really the universal choice. There are a lot of known factors and also unknown factors which help prop up any great idea. Not all these can assigned an "absolute Truth value". Mach's principle happens to be one of these auxiliary ideas which form the foundation of GR. To attempt to separate Mach's Principle from the historical foundations of General Relativity is a lost cause. It was mixed in with the concrete. – user346 Feb 20 '11 at 9:30 show 1 more comment It is not really a case of Mach principle being wrong. It is something related to general relativity, and frame dragging or Lense-Thirring effect are similar to Mach’s principle. However, these are local laws of physics and Mach’s principle is a global hypothesis. Gravity does tell us that spacetime curvature induces motion of masses, and if these masses are large enough that can in turn change curvature. So gravity or spacetime curvature acts on a mass locally and the same experiences acceleration as seen from another frame. This is a geodesic deviation which is defined by the Riemann curvature. So this tells us the mass which is observed to accelerate is the same mass which responds to the curvature --- the equivalence principle. This also sounds similar to Mach’s principle: Inertia there determines inertia here. Mach stated the centripetal force on a local rotating frame is equivalent to what happens if the frame is nonrotating and the entire universe rotates around the frame. Kurt Godel worked out a model of a rotating universe and found a very strange time-looping spacetime. If the universe rotates it has a structure which is radically different from Mach’s original conjecture. Further, this universe violates energy conditions. It is not a physical spacetime, even if it comes from the Einstein field equations. This does at least suggests there are some radical departures from Mach’s principle and general relativity. Mach’s principle does not appear to be a consequence of general relativity. Mach’s principle is rather vague in some respects. The inertia of a body is said to be determined by the inertia (or masses) of all other bodies. Again this sounds like GR, but this extends across the universe. The actual gravitational interaction a particle here has with a galaxy at z = 8 around12.5 billion light years out is miniscule. So the interaction picture appears funny. It might be tempting to say the entire universe is some sort of single quantum wave function, and single masses we observe are entangled subsets. This might sound global and gets away from the $q/r^2$ drop off of gravity. From there one might be tempted to say inertial is inherited this way. However, entanglements do not involve forces, and inertia as an entanglement not defined. I will also say that I suspect Mach had ideas of an ether in space when he suggested this. So he probably had some picture of a rotating frame moving through this medium, and this is somehow equivalent to the whole medium rotating around the frame. So the ether vortex in the case of a rotating world is what generated the centrifugal force on the rotating frame. - The Mach's principle in short is that inertia is not absolute, determined by the matter configuration, and there is no other source. This can be summed up in two statements: 1. 
Massive bodies can affect the inertia of other objects (inertia is determined by the matter configuration) 2. The Universe does not rotate or linearly accelerate as a whole (there is no other source of inertia) The first statement is included in GR and has been proven with the experimental measurement of frame-dragging. In fact, inertia can even be screened, so people in a spaceship rotating against the distant stars could not detect the rotation. The second statement is still not proven (in the rotational part) although it has been verified to a high precision. The linear part simply follows from the conservation of momentum. - The honest answer should be that nobody knows for sure. Einstein was inspired initially by this principle when he was formulating GR. However, it gradually became clear that there are solutions to the GR field equations which do not conform to Mach's principle. Einstein was thoroughly disillusioned in his later years. As a matter of fact GR has some features which are definitely "Machian" but other features which are "non Machian". Whether this principle is right or wrong has always been a subject of controversy. The present opinion of most experts seems to be totally negative about the Mach principle. - 6 Actually, even if we have an ultimate theory of physics, the question of whether Mach's principle is true won't be settled. The main problem is that the principle itself is vague, not that we don't know the physics. – Ted Bunn Feb 19 '11 at 17:05 @Ted: Nevertheless, many great minds have been intrigued by this principle. – user1355 Feb 19 '11 at 17:11 1 Agreed! There's no doubt that Mach's principle is of historic importance and is fun and salutary to think about, even if it's not sufficiently well-defined to admit of being "true" or "false". – Ted Bunn Feb 19 '11 at 17:29 @Ted Bunn --yes but aren't there some recent and planned experiments to test variants of it? – Gordon Feb 19 '11 at 17:31 3 Dear Gordon, there are no serious physicists who want to test Mach's principle in 2011. Such things were being solved by Einstein in 1911. – Luboš Motl Feb 19 '11 at 17:49 show 3 more comments
http://mathoverflow.net/questions/73741?sort=oldest
## What does the “category” of $(\infty,1)$-categories look like? One knows that in higher category theory, the category of $(\infty,n-1)$-categories is naturally an $(\infty,n)$-category (I use the word category to mean category in the correct weakened sense). When the category of $(\infty,1)$-categories is regarded as a weakened Kan complex, we may regard these objects as a full subcategory of simplicial sets. This is a category in the strict sense. One ought to expect that associativity of the maps between the weakened Kan complexes be some sort of weakened associativity. The question is: is this weakened associativity there, and if so how is it understood? - ## 1 Answer You can see the collection of $(\infty,1)$-categories as forming themselves an $(\infty,1)$-category, which is sufficient to see where weak associativity shows up: There are many models for the intuitive concept of $(\infty,1)$-category; the simplest is that of a usual 1-category endowed with a class of weak equivalences (see Barwick/Kan's "Relative Categories: Another model for the homotopy theory of homotopy theories" here). With the weak Kan complexes - together with the notion of equivalence between them - you happen to have found a strictly associative model for the $(\infty,1)$-category of $(\infty,1)$-categories. You can transform it into different other models, e.g. into simplicially enriched categories or quasicategories or Segal categories, as exposed e.g. in Bergner's "Three models for the homotopy theory of homotopy theories" (available here). The Segal category and the quasicategory of $(\infty,1)$-cats no longer have strict associativity, and the fact that they are equivalent descriptions of the $(\infty,1)$-cat of $(\infty,1)$-cats reflects that the strict associativity in your model was an accident and not an essential feature... Edit (in response to the comment) About the significance of having a strict model: Well, different models have different advantages. Your strict one is certainly good for computing compositions of functors between $(\infty,1)$-cats. The quasicategory of $(\infty,1)$-cats on the other hand is e.g. a better model to relate the $(\infty,1)$-cat of $(\infty,1)$-cats to other $(\infty,1)$-cats - examples are the relations between the quasicategories of (small) $(\infty,1)$-cats, presentable $(\infty,1)$-cats and stable $(\infty,1)$-cats given in Lurie's "Higher Topoi" and in DAG 1 (now "Higher Algebra"): There are $(\infty,1)$-adjunctions between these - e.g. between $(\infty,1)$-cats and stable $(\infty,1)$-cats given by taking spectra in an $(\infty,1)$-cat, forgetting the stability of a stable $(\infty,1)$-cat, respectively - and these facts would be hard to express using your model. - This is a nice answer. You went beyond answering my question. I wish that I could upvote this twice. One question that I have is on the significance of having a model with strict associativity. This seems to say that the theory itself can be simplified somewhat. – Lunasaurus Rex Aug 27 2011 at 7:46 2 It is not that significant in itself. One of the equivalent models of all $(\infty, 1)$-categories is strict categories enriched in simplicial spaces (even Kan complexes). So this means that EVERY $(\infty,1)$-category has a strictly associative model. (As Peter rightly notes though, computing with the strict model is difficult.) – Chris Schommer-Pries Aug 28 2011 at 14:39
http://stats.stackexchange.com/questions/14095/when-to-use-students-or-normal-distribution-in-linear-regression
# When to use Student's or Normal distribution in linear regression? I am looking at some problems, and in some, to test the coefficients, sometimes I see people using Student's distribution, and sometimes I see Normal distribution. What is the rule? - 3 – cardinal Aug 10 '11 at 16:57 ## 2 Answers The normal distribution is the large sample distribution in many meaningful statistical problems that involve some version of the Central Limit Theorem: you have (approximately) independent pieces of information that are being added up to arrive at the answer. If parameter estimates are asymptotically normal, their functions will also be asymptotically normal (in regular cases). On the other hand, the Student $t$ distribution is derived under more restrictive conditions of i.i.d. normal regression errors. If you can buy this assumption, you can buy the $t$-distribution being used for testing hypothesis in linear regression. The use of this distribution provides wider confidence intervals than the use of the normal distribution. The substantive meaning of that is that in small samples, you need to estimate your measure of uncertainty, the regression mean squared error, or the standard deviation of residuals, $\sigma$. (In large samples, you kinda have as much information as if you knew it, so the $t$-distribution degenerates to the normal distribution.) There are some occasions in linear regression, even with finite samples, where the Student distribution cannot be justified. They are related to violations of the second order conditions on regression errors; namely, that they are (1) constant variance, and (2) independent. If these assumptions are violated, and you correct your standard errors using Eicker/White estimator for heteroskedastic, but independent residuals; or Newey-West estimator for serially correlated errors, or clustered standard errors for cluster-correlated data, there is no way you can pull a reasonable justification for Student distribution. However, by employing an appropriate version of asymptotic normality argument (traingular arrays and such), you can justify the normal approximation (although you should have in mind that your confidence intervals would very likely be too narrow). - 1 (+1) I love the implication, in the opening of the third paragraph, that linear regression is done with infinite (non-"finite") samples! – whuber♦ Aug 10 '11 at 17:11 – StasK Aug 10 '11 at 17:49 I like the representation of the student t distribution as a mixture of a normal distribution and a gamma distribution: $$Student(x|\mu,\sigma^2,\nu)=\int_{0}^{\infty}Normal\left(x|\mu,\frac{\sigma^2}{\rho}\right)Gamma\left(\rho|\frac{\nu}{2},\frac{\nu}{2}\right)d\rho$$ Note that the mean of the gamma distribution is $E[\rho|\nu]=1$ and the variance of this distribution is $V[\rho|\nu]=\frac{2}{\nu}$. So we can view the t-distribution as generalising the constant variance assumption to a "similar" variance assumption. $\nu$ basically controls how similar we allow the variances to be. You also view this as "random weighted" regression, for we can use the above integral as a "hidden variable" representation as follows: $$y_i=\mu_i+\frac{e_i}{\sqrt{\rho_i}}$$ Where $e_i\sim N(0,\sigma^2)$ and $\rho_i\sim Gamma\left(\frac{\nu}{2},\frac{\nu}{2}\right)$ all variables independent. 
In fact this is basically just the definition of the t-distribution, as $Gamma\left(\frac{\nu}{2},\frac{\nu}{2}\right)\sim \frac{1}{\nu}\chi^2_\nu$. You can see why this result makes the Student t distribution "robust" compared to the normal, because a large error $y_i-\mu_i$ can occur due to a large value of $\sigma^2$ or due to a small value of $\rho_i$. Now because $\sigma^2$ is common to all observations, but $\rho_i$ is specific to the ith one, the general "common sense" thing to conclude is that outliers give evidence for small $\rho_i$. Additionally, if you were to do linear regression $\mu_i=x_i^T\beta$, you will find that $\rho_i$ is the weight for the ith observation, assuming that $\rho_i$ is known: $$\hat{\beta}=(\sum_i\rho_ix_ix_i^T)^{-1}(\sum_i\rho_ix_iy_i)$$ So an outlier constitutes evidence for small $\rho_i$, which means the ith observation gets less weight. Additionally, a small "outlier" - an observation which is predicted/fitted much better than the rest - constitutes evidence for large $\rho_i$. Hence this observation will be given more weight in the regression. This is in line with what one would intuitively do with an outlier or a good data point. Note that there is no "rule" for deciding these things, although mine and others' responses to this question may be useful for finding some tests you can do along the finite variance path (the Student t has infinite variance for degrees of freedom less than or equal to two). - +1: this looks right, but I don't think you should say a mixture of a normal and a gamma distribution, but rather a normal-gamma–normal compound distribution and motivate this construction by saying that the normal-gamma distribution is the conjugate prior of the normal distribution (parametrized by mean and precision). – Neil G Mar 28 '12 at 12:40 Yeah, point taken about the mixture - although I can't think of a non-clumsy way to correct it right now. Note that this form is not unique to conjugate distributions - for example if we replace the gamma pdf with an inverted exponential pdf, we get the Laplace distribution. This leads to "least absolute deviations" instead of least squares as a form of robustifying the normal distribution. Other distributions would lead to other "robustifications" - perhaps not as analytically pretty as Student t though. – probabilityislogic Mar 28 '12 at 13:04
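The scale-mixture representation above is easy to verify numerically. This is an illustrative sketch of my own (not from the thread); note the parameterisation: NumPy's gamma sampler takes a scale, so a rate of $\nu/2$ corresponds to a scale of $2/\nu$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu, mu, sigma, n = 5.0, 2.0, 1.5, 200_000

# rho ~ Gamma(shape = nu/2, rate = nu/2), i.e. scale = 2/nu, so E[rho] = 1
rho = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = mu + (sigma / np.sqrt(rho)) * rng.standard_normal(n)

# (x - mu)/sigma should then be Student t with nu degrees of freedom
print(stats.kstest((x - mu) / sigma, stats.t(df=nu).cdf))
```

The Kolmogorov-Smirnov statistic comes out small, consistent with the mixture being exactly a t distribution.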
http://math.stackexchange.com/questions/189947/when-proving-question-got-double-turnstile-symbol?answertab=votes
# when proving question got double turnstile symbol For example: Prove that (something)⊨(another thing) Is it the same as "Prove that (something)⊢(another thing)"? The single turnstile symbol always appears during sample proofs in my lecture notes. Yet my homework question suddenly got the double turnstile symbol, am I supposed to take it as a single turnstile symbol and do syntactical proving using natural deduction? Thanks! - 2 $\vDash$ usually refers to semantic entailment. In the presence of a soundness theorem, $P \vdash Q$ implies $P \vDash Q$. – Zhen Lin Sep 2 '12 at 9:47 2 The person in the best place to answer your question is the person who assigned the homework. – Gerry Myerson Sep 2 '12 at 11:26 1 – MJD Sep 2 '12 at 16:17 ## 1 Answer $\vDash$ stands for semantic truth rather than provabilty. It has two common uses: • $M\vDash\phi$ where $M$ is a structure, means that formula $\phi$ is always true in $M$. (For ordinary first-order logic, $M$ would consist of a non-empty universe plus concrete realizations for all functions and predicates in the language of $\phi$. For other logics it may be a stranger beast, such as a Kripke structure). • $T\vDash\phi$ where $T$ is a theory, means that $M\vDash\phi$ for every structure $M$ that satisfies the axioms of $T$. The soundness and completeness properties of a formal system state that $T\vDash\phi$ if and only if $T\vdash\phi$ -- but if you're being asked specifically to argue for $T\vDash\phi$, you're probably supposed to do it by arguing more explicitly at the semantic level. -
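To make the semantic reading concrete, here is a toy propositional-logic illustration of mine (not part of the question): $T \vDash \phi$ is checked by brute force over all truth assignments, with no proof system involved at all.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force semantic entailment check over all truth assignments."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False      # found a counter-model
    return True

# Example: {P, P -> Q} semantically entails Q (modus ponens, read semantically)
P = lambda env: env['P']
P_implies_Q = lambda env: (not env['P']) or env['Q']
Q = lambda env: env['Q']

print(entails([P, P_implies_Q], Q, ['P', 'Q']))   # True
print(entails([P_implies_Q], Q, ['P', 'Q']))      # False: P=False, Q=False is a counter-model
```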
http://www.physicsforums.com/showthread.php?p=3850309
Physics Forums ## Why does a^x = e^(x(lna)) Hi every one, first post, so let me know if I'm not following any of the rules. I'm studying Calculus, looking at the rules for differentiating the function a^x. The first step is to change a^x to e^(x(lna)). From there, it's easy to use the chain rule to find the derivative. Why can you do that first step though? I've tried googling around, and can't find an explanation. Also, any tips on doing google searches for this kind of topic? I've tried pasting the equation into google; doing searches for "natural log" guides, "e" guides, and browsed a few precalculus sites, but haven't found the answer I'm looking for. Thanks! Blog Entries: 1 Recognitions: Homework Help $a^x=(e^{\ln(a)})^x$ Do you see what to do from there? Ah! I get it now. $e^{\ln a}$ is equal to $a$ and $(a^b)^c = a^{b \cdot c}$ so $a^x = (e^{\ln(a)})^x = e^{\ln(a) \cdot x}$ Thanks for the super fast reply! I feel silly for not figuring that out sooner. ## Why does a^x = e^(x(lna)) EDIT You got it before I typed this I think this is right, I'm just trying to remember it off the top of my head as my textbook is in school. Let the value of $a^{x}$ be equal to $y$ $a^{x} = y$ Take natural log of both sides $ln(a^{x}) = ln(y)$ Then we can bring the exponent out of the bracket $x * ln(a) = ln(y)$ Then we put both sides as the power of e to cancel the ln on the right $e^{x * ln(a)} = e^{ln(y)}$ $e^{x * ln(a)} = y$ Then since $a^{x} = y$ we sub that in for y and get $e^{x * ln(a)} = a^{x}$ Recognitions: Gold Member Science Advisor Staff Emeritus Another way to see the same thing is to note that $aln(x)= ln(x^a)$ so that $e^{xln(a)}= e^{ln(a^x)}$. Then, because "$f(x)= e^x$" and "$g(x)= ln(x)$" are inverse functions, $e^{ln(a^x)}= a^x$.
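A quick numerical check of the identity, and of the derivative formula it leads to, added for illustration (the values $a=3$ and $x=1.7$ are arbitrary):

```python
import math

a, x, h = 3.0, 1.7, 1e-6
print(a**x, math.exp(x * math.log(a)))                   # the same number twice
numerical_derivative = (a**(x + h) - a**(x - h)) / (2 * h)
print(numerical_derivative, a**x * math.log(a))          # approximately equal: d/dx a^x = a^x * ln(a)
```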
http://physics.stackexchange.com/questions/8813/how-do-i-derive-the-critical-temperature-for-bose-condensation-in-two-dimensions
# How do I derive the critical temperature for bose condensation in two dimensions? In class we derived the 3D case, but there's a step I don't understand: $$N = g \cdot {V \over (2 \pi \hbar)^3} \cdot \int\limits_{0}^{\infty}{1 \over{e^{\left( E_p \over{K_B T}\right)}-1}} d^3 p = g \cdot {V \over (2 \pi \hbar)^3} \cdot 4 \pi \cdot \int\limits_{0}^{\infty}{p^2 \over{e^{\left( E_p \over{K_B T}\right)}-1}} dp$$ ... I feel like if I knew why that step made sense, I could figure out how to do the equivalent thing for the 2D case, but I'm stuck on that. - 3 you are just going from cartesian to spherical coordinates, $(p_x,p_y,p_z)\rightarrow(p,\theta,\phi)$, which has a jacobian of $p^2 \mbox{cos}\theta dp d\theta d\phi$. The integrand only depends on the radial coordinate, so the solid angle part integrates to $4\pi$. – wsc Apr 18 '11 at 2:34 @wsc I'd post that as an answer as it probably answers the question satisfactorily :) – Lagerbaer Apr 18 '11 at 3:20 Thanks @wsc, I thought it had something to do with that. Since I'm having trouble, I'm going to walk through it 'out loud' here... If $E_p$ is only a function of $p_{\rho}$, it stays unchanged, and the $p^2$ in the numerator could be called $p_{\rho}^2$. The limits for $\rho$ remain $0$ and $\infty$, but the limits for $\theta$ and $\phi$ are reduced to their meaningful range. Using the notation more familiar to me, $\theta$ and $\phi$ factor out to $\int_0^{2\pi}d\theta\cdot\int_0^{\pi}\mathrm{sin}(\phi)d\phi=2\pi\cdot2$ – Polyergic Apr 18 '11 at 3:43 ... applying that to the 2D case, converting to polar coordinates, the Jacobian is $r$, $\theta$ factors out into $\int_0^{2\pi}d\theta=2\pi$, the $r$ really means $p_r$, which is just $p$, and the whole thing goes like this: $$N = g \cdot {A \over (2 \pi \hbar)^2} \cdot \int\limits_{0}^{\infty}{1 \over{e^{\left( E_p \over{K_B T}\right)}-1}} d^2 p = g \cdot {A \over (2 \pi \hbar)^2} \cdot 2 \pi \cdot \int\limits_{0}^{\infty}{p \over{e^{\left( E_p \over{K_B T}\right)}-1}} dp$$ That makes sense to me, now I can go trip over the next step... Thanks again @wsc! – Polyergic Apr 18 '11 at 3:54 1 @Polyergic, that's exactly right. @Lagerbaer, I try to avoid submitting 2-sentence responses as answers, but I suppose there isn't much more to say here. :) – wsc Apr 18 '11 at 3:54 show 5 more comments
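The step in question, replacing the Cartesian momentum integral by a radial one, can be sanity-checked numerically for any integrand that depends only on $|p|$. The sketch below is my own (not from the thread) and uses $f(p)=e^{-p^2}$ so the quadrature converges quickly; both numbers should match the exact Gaussian value $\pi^{3/2}$.

```python
import numpy as np
from scipy import integrate

f = lambda p: np.exp(-p**2)

# Cartesian integral over a large box (the integrand is negligible outside |p_i| < 8)
cartesian, _ = integrate.tplquad(
    lambda z, y, x: f(np.sqrt(x**2 + y**2 + z**2)),
    -8, 8, lambda x: -8, lambda x: 8, lambda x, y: -8, lambda x, y: 8)

# Radial integral with the 4*pi*p^2 measure from the spherical change of variables
radial, _ = integrate.quad(lambda p: 4 * np.pi * p**2 * f(p), 0, np.inf)

print(cartesian, radial, np.pi**1.5)   # all three agree
```

The same bookkeeping in two dimensions gives the factor $2\pi p\,dp$ used in the comments above.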
http://mathoverflow.net/questions/24281?sort=votes
## Current status of Bloch Constant and Landau Constant bounds The Bloch constant B (based on a theorem introduced by André Bloch in 1925 on the maximum radius of a one-to-one disk in the image of a normalized analytic function of the unit disk, see for instance Remmert's Funktionentheorie II or Steven Finch's marvelous "Mathematical Constants") was conjectured by Ahlfors to be $$\frac{1}{\sqrt{1+\sqrt{3}}}\frac{\Gamma(\frac{1}{3})\Gamma(\frac{11}{12})}{\Gamma(\frac{1}{4})}$$ (This value, if I remember Ahlfors' article well, corresponds to a particular function that he constructed for this purpose.) The Bloch constant $B$ is currently known to be at least slightly greater than $\frac{\sqrt{3}}{4}$ (several articles improving upon each other by Mario Bonk, Chen and Gauthier, Xiong). Has there been some progress since 1998 on the lower bound? Same question for the closely related (univalent) Landau constant (quite often called the Bloch-Landau constant, sometimes seen as $B_\infty$)? The conjectured upper bound is $$\frac{\Gamma(\frac{1}{3})\Gamma(\frac{5}{6})}{\Gamma(\frac{1}{6})}$$ What can be said of the various adaptations or specializations of this constant to various classes of functions, and extensions of these constants to several complex variables or other functional spaces? I give as background the original articles by Bloch, Ahlfors and Grunsky. (1) A. Bloch, Les théorèmes de M. Valiron sur les fonctions entières et la théorie de l'uniformisation, Ann. Fac. Sci. Univ. Toulouse, vol. 17 (1925), pp. 1-22. (2) L. V. Ahlfors and H. Grunsky, Über die Blochsche Konstante, Math. Zeitschrift 42 (1937), pp. 671–673. (3) L. V. Ahlfors, An extension of Schwarz's lemma, Trans. Amer. Math. Soc. 43 (1938), pp. 359–364. (The latter two are reprinted in Ahlfors' Collected Works, vol. 1.) Ahlfors' life and work are recalled in a 1998 issue of the AMS Notices. - I am not an expert but perhaps this paper could be of some interest: R. Rettinger, On the computability of Bloch's constant ( dx.doi.org/10.1016/j.entcs.2008.03.024) – mathphysicist May 12 2010 at 0:04 And another paper that could be of interest: B. Skinner, The univalent Bloch constant problem. Complex Var. Elliptic Equ. 54, No. 10, 951-955 (2009). DOI: dx.doi.org/10.1080/17476930903197199 Summary: Suppose f∈S and |f(z)|≥B(|z|), where B is a nonnegative function on [0, 1). We present a theorem which provides an implicit function C(|z|) such that |f(z)|≥C(|z|)≥B(|z|). We use this theorem to obtain an explicit improvement in the lower bound for the univalent Bloch constant to 0.5708858. – mathphysicist May 12 2010 at 0:18 @mathphysicist: thanks, especially for the second reference, which is about a lower bound to the Bloch-Landau constant. I then found a paper: euclid.ucc.ie/pages/staff/carroll/Papers/… by Tom Carrol and Joaquim Ortega-Cerda on an upper bound for L giving 0.6563937. They use a construction with a threefold symmetry which recalls Ahlfors' construction. I will try to add more background about L in the original question. – ogerard May 12 2010 at 5:53 ## 1 Answer The world record on Bloch's constant seems to be MR1690898 by C. Xiong, who proved $B\geq \sqrt{3}/4+3\cdot 10^{-4}$. I recall that $\sqrt{3}/4$ is the Ahlfors estimate, then Heins proved that $B$ is strictly greater than that, and Bonk was the first to prove this with a specific constant. Then this constant was slightly improved, first by Gauthier and Chen MR1428103 and then by Xiong.
This is far from the conjectured value. There are also results which show that the Ahlfors - Grunsky conjectured extremal function gives a local extremum for certain variations. But the classes of variations considered are narrow. For example, Baernstein II and Vinson proved that the Ahlfors Grunsky function gives a local extremum for the class of ramified coverings which are ramified only over some lattice points. (So the hexagonal lattice is locally extremal for such restricted problem). -
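For reference, the conjectured closed forms quoted in the question are easy to evaluate numerically; this snippet is my own addition, and the outputs agree with the values usually quoted for these constants.

```python
from mpmath import mp, mpf, gamma, sqrt

mp.dps = 15
bloch_conj = (1 / sqrt(1 + sqrt(3))) * gamma(mpf(1)/3) * gamma(mpf(11)/12) / gamma(mpf(1)/4)
landau_conj = gamma(mpf(1)/3) * gamma(mpf(5)/6) / gamma(mpf(1)/6)

print(bloch_conj)    # ~ 0.4719, the Ahlfors-Grunsky conjecture for B
print(landau_conj)   # ~ 0.5433, the conjectured value for the Landau constant
print(sqrt(3)/4)     # ~ 0.4330, the classical lower bound for B mentioned above
```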
http://mathhelpforum.com/trigonometry/4557-law-cosine-help-print.html
# Law of cosine Help!!

• July 29th 2006, 12:31 PM
^_^Engineer_Adam^_^
Law of cosine Help!!
Cud u give me the formula of Law of Cosine on how to solve a triangle w/ 3 sides given?
Im kind of confused with this problem:
a = 5, b = 6, c = 9
find all 3 angles A, B & C

• July 29th 2006, 01:10 PM
topsquark
Quote:

Originally Posted by ^_^Engineer_Adam^_^
Cud u give me the formula of Law of Cosine on how to solve a triangle w/ 3 sides given? Im kind of confused with this problem: a = 5, b = 6, c = 9 find all 3 angles A,B & C

$c^2 = a^2 + b^2 - 2ab \cdot \cos( \gamma )$
where $\gamma$ is the angle across from side c. Thus $\cos( \gamma ) = \frac{a^2 + b^2 - c^2}{2ab}$. The other formulae simply permute the values of a, b, and c and need not be given.

So for example, to find the angle across from side c we have:
$\cos( \gamma ) = \frac{5^2 + 6^2 - 9^2}{2\cdot 5 \cdot 6} = -\frac{1}{3}$
Thus $\gamma$ is second quadrant and $\gamma \approx 109.5^o$

To find the angle across from side a, use a = 6, b = 9, c = 5, etc.

-Dan

• July 29th 2006, 05:16 PM
ThePerfectHacker
You can also find the angles by using the fact that
$\frac{1}{2}ab\sin \gamma = A$ where $A=\mbox{ area }$,
and you can calculate the area by Heron's Formula:
$A=\sqrt{s(s-a)(s-b)(s-c)}$, $s=\mbox{ semi-perimeter }$

• July 30th 2006, 04:05 AM
Soroban
Hello, Adam!

Quote:

Cud u give me the formula of Law of Cosine on how to solve a triangle w/ 3 sides given? Im kind of confused with this problem: a = 5, b = 6, c = 9 find all 3 angles A,B & C

No, I'm confused . . . You're familiar with the Law of Cosines . . but you've never solved for an angle . . . ever?
Okay, just this once . . .

I assume you know that: . $a^2\:=\:b^2+c^2 - 2bc\cos A$

Rearrange the terms: . $2bc\cos A\:=\:b^2 + c^2 - a^2$

Divide by $2bc:\;\;\boxed{\cos A\:=\:\frac{b^2 + c^2 - a^2}{2bc}}$ . . . a formula for finding $\angle A.$

Similarly, we can derive formulas for the other two angles:
. . $\boxed{\cos B\:=\:\frac{a^2+c^2-b^2}{2ac}}\qquad\boxed{\cos C\:=\:\frac{a^2+b^2-c^2}{2ab}}$

You should memorize these or be able to derive them when needed.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Your problem has: . $a = 5,\;b = 6,\;c = 9$

We have: . $\cos A\:=\:\frac{6^2 + 9^2 - 5^2}{2(6)(9)}\:=\:\frac{92}{108}$
. . Therefore: . $A\:=\:\cos^{-1}\left(\frac{92}{108}\right)\:=\:31.5863381\quad \Rightarrow\quad \boxed{A\:\approx\:31.6^o}$

We have: . $\cos B\:=\:\frac{5^2 + 9^2 - 6^2}{2(5)(9)}\:=\:\frac{70}{90}$
. . Therefore: . $B\:=\:\cos^{-1}\left(\frac{7}{9}\right)\:=\:38.94244127\quad \Rightarrow\quad \boxed{B\:\approx\:38.9^o}$

We have: . $\cos C\:=\:\frac{5^2+6^2-9^2}{2(5)(6)}\:=\:\frac{-20}{60}$
. . Therefore: . $C\:=\:\cos^{-1}\left(-\frac{1}{3}\right)\:=\:109.4712206\quad \Rightarrow\quad \boxed{C\:\approx\:109.5^o}$

Check: . $A + B + C\:=\:31.6^o + 38.9^o + 109.5^o \:=\:180^o$ . . . Yay!

• July 30th 2006, 05:20 AM
^_^Engineer_Adam^_^
Yea ... But the only confusing thing is that after solving the law of cosine to get the 1st angle A which is 32 degrees, i solve the angle using the law of sine... so sin32 / 5 = sin C / 9 and it gave the C an angle of 72.5 degrees... How come?
Btw Thanks topsquark, ThePerfectHacker and thanks again Soroban!!

• July 30th 2006, 06:03 AM
Soroban
Hello again, Adam!

Quote:

After solving the Law of Cosine to get $A = 32^o$ i solved the angle using the Law of Sines. So $\frac{\sin32}{5} = \frac{\sin C}{9}$ and it gave $C = 72.5^o.$ How come?

You fell for a very common "trap" in these problems.
Recall that an inverse sine can have two possible values.
. . For example: . $\sin^{-1}(0.5)\,=\,30^o$ or $150^o$

And your calculator gives you only the smaller value. It is up to you to determine which angle is appropriate.

You got: . $C = 72.5^o$, but $107.5^o$ is also possible.

I've explained this to my students:
"The Law of Sines is much easier to use for determining angles. But the Law of Sines (and your calculator) can lie to you.
. . (It says the angle is $60^o$, but it's really $120^o.)$
Hence, I recommend that you use the Law of Cosines to find angles. . . (It doesn't lie.)"

• July 30th 2006, 06:11 AM
^_^Engineer_Adam^_^
I c so u hav ta use cosine all the time involving 3 sides .... thanks master soroban!! i finally found the answer to my frustration... coz i got a 20 / 50 in a seatwork with that problem... really appreciated!
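As a quick cross-check of the worked solution above, here is a minimal R sketch (the function name solve_sss is my own, not from the thread) that computes all three angles directly from the Law of Cosines, and then shows the inverse-sine ambiguity Soroban warns about:

```
# Solve a triangle from three sides (SSS) using the Law of Cosines:
#   cos(A) = (b^2 + c^2 - a^2) / (2*b*c), and similarly for B and C
solve_sss <- function(a, b, c) {
  A <- acos((b^2 + c^2 - a^2) / (2 * b * c)) * 180 / pi
  B <- acos((a^2 + c^2 - b^2) / (2 * a * c)) * 180 / pi
  C <- acos((a^2 + b^2 - c^2) / (2 * a * b)) * 180 / pi
  c(A = A, B = B, C = C, sum = A + B + C)  # sum should be 180
}

solve_sss(5, 6, 9)
# A ~ 31.6, B ~ 38.9, C ~ 109.5 degrees, matching the thread

# The Law of Sines trap: asin() cannot tell C from 180 - C
asin(9 * sin(32 * pi / 180) / 5) * 180 / pi  # ~ 72.5, the calculator's branch;
                                             # the angle actually wanted is 180 - 72.5
```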
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8714995384216309, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/96246-chinese-remainder-theorem.html
# Thread: 1. ## Chinese Remainder Theorem For the question, Find the smallest x which solves the system of congruence equations: x =+ 3 mod 15 x =+ 6 mod 224, where =+ stands for congruence modulo operation. I gave an argument oriented solution as follows. Any multiple of 224 is even. So, 6+ any multiple of 224 is even. => x is even. Also, any multiple of 15 ends with 5 or 0. 3+any multiple of 15 ends with 8 or 3. Since X is even, the number should end with 8. Since X = A number ending with 8, Also, X = A multiple of 224 + 6 This means the multiple of 224 ends with 2. Checking the multiples of 224, we get 224, 448, 672. The smallest one is 672 ==> X is 678. This solves both the equations as 678 = 224*3 +6 = 45*15 +3 My friend argued about a general solution for the same problem using Chinese Remainder Theorem. I browsed results for the same in various sites. I find only a general solution found in most cases. Can anyone describe how to get the particular solution for this problem through Chinese Remainder Theorem? Also, please mention if there are any mistakes in my argument-oriented solution. 2. yes you can.... so we have 3 mod 15 6 mod 224 gcd(224,15)=1 so we use chinese thm. first thing we need is integers a and b s.t... a15+b224=1 and after we find a and b using euclid's algorithm we compute 6a15+3b224 (mod 15.224=3360) euclid's algorithm gives a=15 and b=-1 so 6(15)15+3(-1)224=678 mod 3360 and adding and subtracting 3360 would give you many solutions. you can also use this method to solve many problems which your method would be increasingly difficult to use for example.... Find a number which is 3 mod 7, 2 mod 5 and 1 mod 2. 7,5 and 2 are coprime so use chinese thm... first concentrate on 3 mod 7 and 2 mod 5 find a,b so that a7+b5=1 a=-2 and b=3 using euclid compute 2a7+3b5 (mod 7.5=35)=2(-2)7+3(3)5=17 mod 35 now do the same with 17 mod 35 and 1 mod 2 i need to compute 1a35+17b2 where a35+b2=1 a=1 and b=-17 so 1a35+17b2=17 mod 70=87mod70 Note: the reason why we need the gcd to be 1 is because the chinese remainder theorem is based on this. but you have a nice little argument there, nice one. 3. Here's a similar argument: x= 6 mod 224 so x= 224k+ 6 for some integer k. But 224= 14 mod 15 so that is the same as x= 14k+ 6 mod 15. We have x= 14k+ 6= 3 mod 15 or 14k= -3 mod 15 so 14k= 15j- 3 for some integer j. That is, 15j- 14k= 3. It is obvious that 15- 14= 1 (shortcutting the Euclidean division algorithm!) so 15(3)- 14(3)= 3. k= j= 3 is a solution. Taking k= 3, x= 224(3)+ 6= 678. But if you can come up with a valid argument of your own, that's always best. 4. @Krahl Thanks for your descriptive reply for the question; I get the solution part right till the general solution; You had mentioned that adding and subtracting 3360 would yield other solutions; It is this part where I do mistakes some times. Can you please throw some more light on one or two of the particular solutions? MAX, 5. @HallsofIvy That's another interesting aliter for the same problem; Good one 6. ## CRT I was just thinking it might actually help you to see the actual proof of the theorem so you can see why these solution methods actually work. I am not sure if you are familiar with rings and ideals and stuff, so I will leave it in terms of the integers, but if you are interested, the same proof idea works in general. You assume that (m,n)=1 (this is enough to show in a ring (m)+(n)=R). 
Then you have the following isomorphism: $\frac{\mathbb{Z}}{mn\mathbb{Z}}\cong \frac{\mathbb{Z}}{m\mathbb{Z}} \times \frac{\mathbb{Z}}{n\mathbb{Z}}$ Here is the isomorphism: $\phi(x)=(x$ (mod m), $x$ (mod n)). It is a homomorphism: $\phi(x+y)=(x+y$ (mod m), $x+y$ (mod n)) $=(x$ (mod m), $x$ (mod n)) $+(y$ (mod m), $y$ (mod n))= $\phi(x)+\phi(y)$. It is pretty clear that $ker(\phi)=mn\mathbb{Z}$ So it remains to show that it is surjective, and this is the key part of the argument for what you are looking for. Let $(a$ (mod m), $b$ (mod n)) be an arbitrary element of $\frac{\mathbb{Z}}{m\mathbb{Z}} \times \frac{\mathbb{Z}}{n\mathbb{Z}}$ This is basically saying we need some integer that will satisfy $x\equiv a$ (mod m) and $x\equiv b$ (mod n). So this is the trick. Because we know (m, n) by the euclidean algorithm, there exists $s,t \in \mathbb{Z}$ such that $sm+tn=1$. Just a side note: In the more general setting you say that the ring contains a unit and since we have (m)+(n)=R, there exists $i\in (m)$ and $j\in (n)$ such that $i +j =1$. Actually the ring need not even be a PID, it is true for any ideals that are comaximal in a commutative ring. So back to the surjectivity. We want to sort of cross multiply here and take the value $bsm+atn$ and see that it maps where we want it. $\phi(bsm+atn)=(bsm+atn$ (mod m), $bsm+atn$ (mod n)) $=(atn$ (mod m), $bsm$ (mod n))= $(a(1-sm)$ (mod m), $b(1-tn)$ (mod n))= $(a-asm)$ (mod m), $b-btn$ (mod n))= $(a)$ (mod m), $b$ (mod n)). So this proves the isomorphism. This shows you $bsm+atn$ (mod mn) is the number you want to pick to solve the system of equations above. The fact that the kernel is $mn\mathbb{Z}$ just tells you that you can actually just add kmn for any integer k to your solution and you will get all the solutions to this system of equations. 7. Originally Posted by MAX09 @Krahl Thanks for your descriptive reply for the question; I get the solution part right till the general solution; You had mentioned that adding and subtracting 3360 would yield other solutions; It is this part where I do mistakes some times. Can you please throw some more light on one or two of the particular solutions? MAX, Hi max not sure what part of the solution you're asking for but i'll try writing a few things on the solutions. well you agree that 678 is a solution right? Now the useful thing about using the chinese rem thm is that you end up not just with the answer 678 but also mod 3360. This mean that any number which is modulo 3360 would satisfy your equations, for example if we add and subtract 3360 we get the following alternative solutions; ........, -2682,678,4038,7398......., Now this means that all these numbers are 678 modulo 3360 i.e. have remainder 678 when divided by 3360. so 4038=3360+678, 7398=3360*2+678 etc... And the theorem claims that all these solutions also satisfy, so let's check them; 7398=15*493+3 so it is 3 mod 15 7398=224*33+6 so it is 6 mod 224 That is why the question asks for the smallest positive solution. I did the same with the other example and got the solutions 17 mod 70 and 87 mod 70. Also notice this step i did; 6a15+3b224 its 6 mod 224 that is why 6 is next to 15. since 224 = 0 mod 224 so that you are left with 6a15 + 0 mod224 and so a*15 must give me 1 mod 224 so that 6a15+0 mod 224=6 mod 224 and same with mod 15. any problems dont hesitate to post 8. @Gamma That wasn't exactly the procedure I was looking for. Anyways, thanks for the effort.. It helped understanding the relation the CRT had with isomorphisms. Thanks a bunch !!! 
9. ## @krahl

Yes, your post helped me understand the procedure completely. I had to get my basics on the Extended Euclidean Algorithm right; the following url helped me: http://marauder.millersville.edu/~bikenaga/absalg/euc/euclidex.html

Thanks again, Krahl!! Max
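For anyone who wants to check the arithmetic in this thread, here is a small R sketch (the function names egcd and crt2 are mine, not from the thread) of the recipe Krahl describes: run the extended Euclidean algorithm to get a·m1 + b·m2 = 1, combine the two residues, and reduce modulo m1·m2:

```
# Extended Euclidean algorithm: returns c(g, a, b) with a*m + b*n = g = gcd(m, n)
egcd <- function(m, n) {
  if (n == 0) return(c(m, 1, 0))
  r <- egcd(n, m %% n)                # r = c(g, a', b') for the pair (n, m mod n)
  c(r[1], r[3], r[2] - (m %/% n) * r[3])
}

# Chinese Remainder for two congruences: x = r1 (mod m1), x = r2 (mod m2), gcd(m1,m2)=1
crt2 <- function(r1, m1, r2, m2) {
  e <- egcd(m1, m2)                   # e[2]*m1 + e[3]*m2 = 1
  x <- r2 * e[2] * m1 + r1 * e[3] * m2
  x %% (m1 * m2)                      # smallest non-negative solution
}

crt2(3, 15, 6, 224)                   # 678, as found in the thread
678 %% 15; 678 %% 224                 # 3 and 6: both congruences hold

# Krahl's second example (3 mod 7, 2 mod 5, 1 mod 2), chained two at a time
crt2(crt2(3, 7, 2, 5), 35, 1, 2)      # 17, and 87 = 17 + 70 also works
```

Running crt2(3, 15, 6, 224) returns 678, and adding or subtracting multiples of 3360 gives the other solutions Krahl lists.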
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8937405943870544, "perplexity_flag": "middle"}
http://climateaudit.org/2007/01/06/paul-linsays-poisson-fit/
Paul Linsay's Poisson Fit
by Steve McIntyre

Paul Linsay contributes the following:

Using Landsea's data from here, plus counts of 15 and 5 hurricanes in 2005 and 2006 respectively, I plotted up the yearly North Atlantic hurricane counts from 1945 to 2004 and added error bars equal to $\pm \sqrt{count}$, as is appropriate for counting statistics. The result is in Figure 1.

Figure 1. Annual hurricane counts with statistical errors indicated by the red bars. The dashed line is the average number of hurricanes per year, 6.1.

There is no obvious long term trend anywhere in the plot. There is enough noise that a lot of very different curves could be well fit to this data, especially data as noisy as the SST data.

I next histogrammed the counts and overlaid it with a Poisson distribution computed with an average of 6.1 hurricanes per year. The Poisson distribution was multiplied by 63, the number of data points, so that its area would match the area of the histogrammed data. The results are shown in Figure 2. The Poisson distribution is an excellent match to the hurricane distribution given the very small number of data points available. I should also point out that I did no fitting to get this result.

Figure 2. Histogram of the annual hurricane counts (red line) overlaid with a Poisson distribution (blue line) with an average of 6.1 hurricanes per year.

I conclude from these two plots that

1. The annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process with a mean of 6.1 hurricanes per year. The trends and groupings seen in Figure 1 are due to random fluctuations and nothing more.

2. The trend in Judith Curry's plot at the top of this thread is a spurious result of the 11 year moving average, an edge effect, and some random upward (barely one standard deviation) fluctuations following 1998.

This entry was posted on Jan 6, 2007 at 1:44 PM, filed under Hurricane.

188 Comments

1. Steve McIntyre
Solow and Moore 2000, cited by Roger Pielke, also fitted a Poisson model to hurricane data and concluded that there was no trend to the hurricane data that had then accrued. The 2005 hurricane count does appear loud to me in Poisson terms, but then so would 1933 and 1886, so the process may be a bit long-tailed.

2. TAC
Steve, nice job! I question the meaning of "error bars" in the first figure. Ignoring the undercount issues, we know the values exactly. The second graph makes the point that the data could have come from a Poisson population.

Steve: TAC, you mean, "nice job, Paul"

3. Posted Jan 6, 2007 at 2:25 PM | Permalink | Reply

added error bars equal to +- sqrt(count) as is appropriate for counting statistics

Not very familiar with this, any reference for a layman?

4. Pat Frank
#3 — The error bars in Figure 1 assume a completely random process. That would be the null assumption (no deterministic drivers). The Poisson plot shows that the system has a driver, but is a random process within the bounds determined by the driver. It's a lovely result, Paul, congratulations. You must have laughed with delight when you saw that correlation spontaneously emerge, and with no fitting at all. That feeling is the true reward of doing science. I expect Steve M. has experienced that, too, now.
In a strategic sense, your result, Paul, shows that a large fraction of the population of climatologists have a pre-determined mental paradigm, namely AGW, and are looking for trends confirming that paradigm. They have gravitated toward analyses — an 11-year smoothing that produces autocorrelation, for example — that produce likely trends in the data. These are getting published by editors who also accept the paradigm and so accept unquestioningly as correct the analyses that support it. Ralph Ciccerone’s recent shameful accomodation of Hansen’s splice at PNAS is an especially obvious example of that. These are otherwise good scientists who have decided they know the answer without actually (objectively) knowing, and end up enforcing only their personal certainties. Honestly, your result deserves a letter to the same journal where Emanuel published his trendy (in both senses) hurricane analysis. Why not write it up? It’s clearly going to take outside analysts to bring analytical modesty back to the field. Being shown wrong is one thing in science. Being shown foolishly wrong is quite another. Actually, now that I think about it, does the pre-1945 count produce a Poisson distribution with a different median? If so, you could show that, and then include Margo’s correction of the pre-1945 count, add the corrected count to your data set and see if the Poisson relationship extends over the whole set. Co-publish with Margo. It will set the whole field on its ear. Plus, you’ll have a really great time. 5. Pat Frank #4 “Emanuel” — that should have been Holland and Webster (Phil. Trans. Roy. Soc. A) — but then, your result deserves a wider readership than that. 6. Jean S Nice job, Paul! re #4: Actually, now that I think about it, does the pre-1945 count produce a Poisson distribution with a different median? If so, you could show that, and then include Margo’s correction of the pre-1945 count, add the corrected count to your data set and see if the Poisson relationship extends over the whole set. Co-publish with Margo. It will set the whole field on its ear. Plus, you’ll have a really great time. For that, one could use, e.g., the test I referred here. Since people seem to have both interest and time (unfortunately I’m lacking both right now), just a small hint : I think there was the SST data available somewhere here. Additionally R users look here: http://sekhon.berkeley.edu/stats/html/glm.html and Matlab users here: http://www.mathworks.com/products/statistics/demos.html?file=/products/demos/shipping/stats/glmdemo.html 7. Posted Jan 6, 2007 at 3:38 PM | Permalink | Reply Too funny! I was just reading Jean S’s suggestion on the other thread. My precise thought was: I should correct my analysis for poisson distributions! 8. IL Paul, ha, like yesterday you beat me to it again. I plotted the data from Judith Curry’s link from yesterday and it is not quite such a good fit to a Poisson distribution as the graph you show above, but pretty good. To look at the correlation to a Poisson distribution I plot the fractional probability from the hurricanes per year (the number of years out of 63 that have a particular number of hurricanes in it divided by the total 63 years) against the probability for that particular number of hurricanes from a Poisson distribution with that mean value. Then a perfect correlation would be a straight line of gradient 1. 
An analysis of the error bars on that graph shows a good correlation to a Poisson distribution with R2~0.7 but the error bars are so large (a consequence of the small numbers) that it very difficult to rule anything in or out. #2 TAC, the words ‘error bars’ here are not ‘errors’ in the sense of measurement errors. The assumption here is that the number of hurricanes is the actual true number with no undercounting or any other artifacts distorting the data (as if you had god-like powers and could count every one infallibly), but if you have a stoichastic process (something that is generated randomly) then just because you have 5 hurricanes in one year, then even if in the next year all the physical parameters are exactly the same, you may get 3, or 8. Observed over many years you will get a spread of numbers of hurricanes per year with a certain standard deviation. That perfectly natural spread is what has been referred to as the ‘error bar’. It is a property of a Poisson distribution that if you have N discrete counts of something, then the standard deviation of the distribution is equal to the square root of N. Try Poisson distribution on Wikipedia. As N becomes larger and larger, the asymmetry reduces and it looks much more like a normal distribution. But where you have a small number of discrete ‘things’ – as here with small numbers of hurricanes each year, then the distribution is asymmetrial (because you can’t have less than 0 storms per year). I would say that the excellent fit to a Poisson distribution shows that hurricanes are essentially randomly produced and what is more, the small number of hurricanes per year (in a statistical sense) makes determining trends statistically nonsense unless you have vastly more data (vastly more years). As I noted in the other thread yesterday, concluding that hurricanes are randomly produced does NOT preclude a correlation with SST, AGW, the Stock Market, marriages in the Church of England or anything else. If there is a correlation with SST over time, for the sake of argument, what you would see is a Poisson distribution in later years with a higher average than the earlier years. What always seemed to have been omitted in these arguments previously like in the moving 11 year average by Judith Curry is that the error bars (natural limitations on confidence) are very large. How certain you can be about whether that average really has gone up or not, is almost non-existant on this data. You might suspect a trend but can’t on these numbers show a significant increase with any level of confidence. By the same token, Landsea’s correction is way down in the noise. 9. TAC #3 Steve and Paul, I apologize for mixing up your names. Oops! #4 Re error bars: By convention these are used to indicate uncertainty corresponding to a plotted point. In this case, there is no uncertainty (again, of course, ignoring the fact that there is uncertainty because of the undercount!). Thus the error bars should be omitted from the first figure. Going out on a limb, I venture to say that the error bars were computed to show something else: That each observation is individually “consistent with” the assumption of a Poisson df (perhaps with lambda equal to about 6). Anyway, it appears that each error bar is computed based only on a single datapoint (N=1). This procedure results in 62 distinct interval estimates of lambda. However, it is not clear at all why we would want these 62 estimates. 
The null hypothesis is that the data are iid Poison, so we can safely use all the data to come up with the “best” single estimate of lambda and then test H0 by considering how likely it is that we would have observed the 62 observations if H0 is true (e.g. with a Kolmogorov-Smirnov test). Finally, I agree with you that it is a “lovely result,” as you say 10. IL Argh, sorry TAC, just realised I misread the comment numbers and should have addressed my comments on Poisson statistics to UC in #3, not you. 11. Paul Linsay There was a link to hurricane data that was inadvertantly dropped. #3. UC, The best I can do for you is “Radiation Detection and Measurement, 2nd ed”, G. F. Knoll, John Wiley, 1989. Chapter 3 is a good discussion of counting statistics. Suppose you have a single measurement of N counts and assume a Poisson process. What is the best estimate of the mean? N. What is the best estimate of the variance? N. #4, Pat The Poisson plot shows that the system has a driver How does it do that? 12. Ken Fritsch The analyses all around here at CA on TC frequencies and intensities have been most revealing to me, but the results have not been all that surprising once I understood that the potentional for cherry picking (and along with the evidence of how poorly the picking process is understood by those doing it) was significantly greater than I would have initially imagined. What will be more revealing to me will be the reactions to these analyses. The analyses say be very careful how you use the data, but I fear the inclination is, as Pat Frank indicates, i.e. here is what we suspect is true and here is the analysis from these selected data to substantiate it. 13. IL #9 TAC (really this time), I have to disagree with you about the ‘error bars’ on the graph, they shoudn’t be omitted, they are vital. There may be no uncertainty in the measured number of storms in a year but what should be plotted is ‘confidence limit’ – error bar is perhaps a loaded term. If you could run the year over and over again then you would get a spread of values, that is the meaning of the uncertainty or ‘error bar’ plotted in the figure 1. 14. TAC #13 IL, do you agree that the same value of the standard deviation should apply to every observation? If not — and the graph clearly indicates that it does not — could you explain to me why? 15. Posted Jan 6, 2007 at 5:28 PM | Permalink | Reply There is a scientific response to this: “Ack!” The RealClimate response: “*sigh* so how much was Paul Linsay paid through proxies some laundered money from Exxon-Mobil to confuse the public with regard to the scientific consensus on global warming?” My response: “Holy crap, we’ve been trying to predict a random process all of this time. Someone should call Hansen and Trenberth and recommend a ouija board for their next forecast – unless they’re already using one” 16. IL #14, no, because the count in each year is an independent measurement and the years are not necessarily measuring the same thing. It is conceivable for example that a later year has been affected by a positive correlation with SST (or any other mechanism) so the two periods would not have the same average. If you have measured a count of N then the variance of that in Poisson statistics is N. 
If you hypothesize that a number of years are all the same so that you add all the counts together then you have a larger value of N but the fractional uncertainty is lower since it is the standard deviation divided by the value N but since the standard deviation is the square root of N then fractional uncertainty is (root N)/N = 1/(root N) and thus as N increases, the relative uncertainty decreases. If you treat each year as separate then I believe that what Paul has done in figure 1 is correct. 17. IL John A – just because it is a random process does not mean that there cannot be a correlation with SST, AGW, the Stock Market, marriages in the Church of England or anything else #8 Its quite reasonable to look to see if there is a correlation with time, SST or whatever. I have no problems with that. What I do think though is that the statistics of these small numbers make the uncertainties huge so that the amount of data you need to be able to confidently say that there is a real trend is far larger than is available. You would need many years’ more data to reduce the uncertainties – or if there was a real underlying correlation with SST – or whatever – it would have to be much more pronounced to stick up above the natural scatter. 18. TAC Il, each year is an independent measurement and the years are not necessarily measuring the same thing Well, under the null hypothesis the “years” are measuring the same thing. Each one is an iid variate from the same population. The variates (not the observations) have the same mean, variance, skew, kurtosis, etc. Honest! I’ll try to find a good reference on this and post it. 19. Jean S Elsner is using Poisson regression for Hurricane prediction: Elsner & Jagger: Prediction models for annual U.S. hurricane counts, Journal of Climate, v19, 2935-2952, June 2006. http://garnet.fsu.edu/~jelsner/PDF/Research/ElsnerJagger2006a.pdf He has some other interesting looking publications here: http://garnet.fsu.edu/~jelsner/www/research.html and a (recently updated) blog here: http://hurricaneclimate.blogspot.com/ 20. Posted Jan 6, 2007 at 6:12 PM | Permalink | Reply Re 17: IL, thanks for making me laugh. We’re now generating our own statistics-based humor on this blog. Re #11: Paul, what happens if you apply the same analysis to the global hurricane data? Now I’m curious. Because if the global data follows the same Poisson distribution then we’re looking at an even bigger delusion in climate science than temperatures in tree-rings. 21. Jos Verhulst What type of curve would arise with a Poisson constant gradually rising , for instance from alpha=5 in 1950 to alpha=7 in 2000 ? Would that curve be distinguishable from a Poisson graph with intermediate alpha, given the coarseness resulting from the fact that N = 63 only? 22. IL #20 John A – glad I have some positive effect #18 TAC, No, I don’t think so (although I am always aware of my own fallibilities and am willing to be educated). I think I understand the point you are making but I am not sure it is correct here in the way you imply. There are a lot of examples given in http://en.wikibooks.org/wiki/Statistics:Distributions/Poisson one example is going for a walk and finding pennies in the street. Suppose I go for the same walk each day. Many days I find 0 pennies, a few days I find 1, a few days I find 2 etc I can average the number of pennies per day and come up with a mean value that tells me something about the population of pennies ‘out there’ and it will follow a Poisson distribution. 
If I walk for many more days I can be more and more confident of the mean value (assuming the rate of my neighbours losing pennies is constant) but I then cannot say anything about whether there is any trend with time – eg are my neighbours are being more careless with their small change as time goes by? In order to test whether there is some trend with time I need to look at each individual observation and treat that as the mean value which is what Paul did in the original graph. Yes, if I assume that there is some constant rate of my neighbours losing pennies, some of which I find, I can look at the total counts and I can then get a standard deviation but I would then not have 63 data points all with the same ‘error bar’, I would have one data point with the ‘error bar’ in the time axis spanning 63 years. Yes, you can look at (for the sake of argument) pre 1945 hurricane numbers and post 1945 hurricanes and get a mean and standard deviation from Poisson statistics and infer whether there has been any change between those two periods with some sort of confidence limit but then you only have 2 data points. 23. Posted Jan 7, 2007 at 3:21 AM | Permalink | Reply I would like to announce my official “John A Atlantic Hurricane Prediction” for 2007. After extensive modelling of all of the variables inside of a computer model costing millions of dollars (courtesy of the US taxpayer) and staffed by a team of PhD scientists and computer programmers, I can announce: For 2007, the number of hurricanes forming in the Atlantic will be 6 plus or minus 3 24. Hans Kelp Hey everybody. Speaking of “error bars”, what do you actually mean by that? Is it a definite limit of values which is acceptable as long as they stay within some given boundaries, or is it some definite “borderline” whose going beyond will cancel the veracity, or whatever you might call it, of your calculations? In Danish I think we call it “margin of error”, but I am not sure you mean the same by “error bars” so It makes it easier for me as layman to follow your discussion on this thread. Thank you. HK 25. TAC #22 Il, I think I understand the purpose that the “bars” in the first graphic were intended to serve. My concern had to do with whether use of bars for this purpose deviates from convention. I spent some time looking on the web, expecting to find a clear statement on error bars from either Tukey or Tufte. Unfortuntately, such a statement does not seem to exist. I did find one statement which can, I think, be interpreted to support your position: Note that there really isn’t a standard meaning for the size of an error bar. Common choices are: 1 $\sigma$ (the range would include about 68% of normal data), 2 $\sigma$which is basically the same as 95% limits, and 0.674àƒ’€”$\sigma$ which would include 50% of normal data. The above may be a population standard deviation or a standard deviation of the mean. Because of this lack of standard practice it is critical for the text or figure caption to report the meaning of the error bar. (In my above example, I mean the error bar to be 1 $\sigma$ for the population.) However, this appears in a discussion of plotting large samples, and it seems likely that the word “population” was intended to refer to the sample, not the fitted distribution based on a sample of size N=1. Where does that leave things? Well, I continue to believe that we should reserve error bars for the purpose of displaying uncertainty in data. 
For the second purpose, to show how well a dataset conforms to a specific population, there are lots of good graphical methods (I usually use side-by-side Boxplots, admittedly non-standard but easily interpreted; I’ve also seen lots of K-S plots, overlain histograms, etc.). However, returning to error bars for the moment, perhaps the important point is already stated in the quote above: “Because of this lack of standard practice it is critical for the text or figure caption to report the meaning of the error bar.” 26. Posted Jan 7, 2007 at 6:39 AM | Permalink | Reply 11, thanks for the reference, I’ll try to find it. Like I said, not very familiar with counting processes. However, let’s still write some thoughts down: Suppose you have a single measurement of N counts and assume a Poisson process. What is the best estimate of the mean? N. What is the best estimate of the variance? N. Yes, the mean of observations from Poisson process is a MVB estimator of intensity (lambda, tau=1), and variance of this estimator is lambda/n, where n is the sample size. And I guess that the mean is best estimate for the process variance as well. But I think you assume that each year we have different process, which confuses me. How about thinking the whole set (n=60 or something) as realizations of one Poisson process, and testing whether it is a good model (i.e. Poisson process, constant lambda, estimate of lambda is 6.1). Plot this constant mean, add 95,99 % bars using Poisson distribution and plot the data to the same figure. 27. richardo If Figures 1 and 2 were presented the other way around, the meaning of the “error bars” in Figure 1 could be presented more logically. From Figure 2 one can deduce that the data are from a Poisson distribution. Each annual count then is an estimate of the Poisson mean, with the one sigma confidence values on that mean as shown. The time series then can be examined to see if there is evidence of a change in the mean of the distribution. 28. James Erlandson There are three types of error here: Sampling error Early samples were taken from land and shipping lanes leaving large areas unsampled or under sampled. The size of this error has gone down with time. Methodological error Which includes everything from indirect methods of estimating location, winds speed and pressure to accuracy and precision of instruments. This also has gone down with time but is still non-zero. Process error We assume that even if the “climate” doesn’t change from year to year, the number of storms will. Any meaningful “error bars” would have to include (estimates of) the above. 29. Jos Verhulst I conclude from these two plots that (…) (2)The trend in Judith Curry’s plot at the top of this thread is a spurious result of the 11 year moving average, an edge effect, and some random upward (barely one standard deviation) fluctuations following 1998. I still don’t understand why the nice fit in the second plot implies the absence of a trend. Suppose that there was a very clear trend, with 1 hurricane in 1945, 2 hurricanes in 1946, … , and finally 12 hurricanes in 2005 and 15 hurricanes in 2006. Figure 2 would remain competely unaltered. So the fact that the Poisson distribution fits the histogram seems irrelevant as far as the existence of a trend is concerned. It is possible to obtain a Poisson distribution with one global rate, just by adding smaller distributions with different rates. 30. TAC #26 UC: I completely agree with what you’ve written. 31. 
Posted Jan 7, 2007 at 10:42 AM | Permalink | Reply #29, True, histogram doesn’t care about the order. If google didn’t lie, 0.01 0.05 0.95 0.99 quantiles for Poisson(6) are 1 2 10 12, respectively. So, to me, the only problem with Poisson(6) model in this case are the 10 consecutive less-than-averages in 70′s. So the fact that the Poisson distribution fits the histogram seems irrelevant as far as the existence of a trend is concerned Just checked, term ‘trend’ is not in Kendall’s ATS subject index. What are we actually looking for? IMO we should look for possible changes in the intensity parameter. 32. TAC #29 James, your point is well taken. You could have a strong trend and still obey a Poisson distribution. However, it appears that is not the case here; there is no trend in the data. Incidentally, landfalling hurricanes were considered (here), and it seemed that the data were almost too consistent with a simple Poisson process. It made me wonder what was going on. 33. bender TAC, re #9: search on “ergodicity” at CA (or “count ‘em, five”. This was the subject of argument between myself and “tarbaby” Bloom. The counts are known with high (but not 100%) accuracy, but counts are not the issue; it’s the behavior of the climate system that’s the issue, and your desire to make an inference *among* years. If the climate system were to replay itself over 1950-2006, you’d get a different suite of counts. That’s the sense in which “error” is meaningful for a yearly count. This is going to sound fanciful to anyone who has not analysed time-series data from a stochastic system. However it is epistemologically and inferentially correct. 34. richardT #32 What test have you used to establish that there is no trend? A GAM fitted to these data, with Poisson variance, finds significant changes with time (p=0.027). This is only an approximate test, but a second order GLM, again with Poisson variance, is also significant (p=0.039). 35. KevinUK #23 John A “After extensive modelling of all of the variables inside of a computer model costing millions of dollars (courtesy of the US taxpayer) and staffed by a team of PhD scientists and computer programmers, I can announce: For 2007, the number of hurricanes forming in the Atlantic will be 6 plus or minus 3″ I’ve very disapponted that as a fellow UK taxpayer you do not appeciate the fact that inorder to justify the signifcant sums of money we spend of funding this vital (to saving the planet) climate research that your supercomputer can only calculate to one significant figure. As a concerned UK taxpayer I have taken he liberty to once more fire up my retired backofthefagpacket supercomputer (which was retired from AERE Harwell some years ago after it was no longer required to solve the Navier-Stokes equations) and based on its the results it has output my prediction (endorsed by the NERC due to its high degree of precision) is 6.234245638939393 (+/- n/a as this calculation has been peformed by a supercomputer that can calculate pi to at least 22514 ecimal places as memorised by Daniel Tammet). As a UK tax payer I feel that it is important that such calculations must be highly precise and certainly not subject to any uncertainty. As a Church of England vicar I also appreciate that my mortality has already been determined (something which sadly people like Yule did not understand). I do confess however to be puzzled as to why inflation appears to have remained relatively constant and low since 1997 yet as a result of AGW it is now much rainier in the UK? 
KevinUK 36. TAC #33 Bender, I’m not sure I understand your point. FWIW, I have a bit of familiarity with time series. However, the question here has to do with graphical display of information, and specifically the use of error bars. At the risk of repeating myself, where the plotted points are known without error, by convention (i.e., what I was taught, but it does seem to be accepted by the overwhelming majority of practitioners) one does not employ error bars. Of course I understand your point about ergodicity. I agree there is a perfectly appropriate question about how the observations correspond to the hypothesized stochastic process, and clearly the variance of the process plays a role. As I think we both know, there are plenty of graphical techniques for communicating this information, some of which are mentioned above. But I do not see how this has anything to do with how one plots original data. It is ironic that this debate about proper graphics is occurring in the context of a debate about uncertainty in hurricane count data. For example, I thought Willis (here) presented an elegant way to display the uncertainty of the hurricane count data using both error bars and semicircles. That’s what error bars are for: to communicate the uncertainty in the data (which could be measured values, model results, or whatever). Climate scientists need to get used to thinking this way, and, as with other statistical activities, it is important to employ consistent and defensible methods. In a nutshell, plotting the 2005 hurricane count as 15 +/- 3.8 suggests that there might have been 18 hurricanes in 2005. That’s simply wrong. Said differently, the probability of an 18 in 2005 is zero; the number was 15. That number will never change (unless…). Data are data, data come first, and the properties of the data, including uncertainty, do not depend on the characteristics of some subsequently hypothesized stochastic process (at least in the classical world, where I spend most of my time). Finally, to be clear: I am raising an issue of graphical presentation. If the graphics were done differently — UC had it right in #26 — there would not be a problem. The problem with Figure 1 is that it overloads “error bars” in a way that’s bound to cause confusion. That’s my \$0.02. 37. Steve McIntyre #36. TAC, that makes sense to me as well. 38. 
TAC #34 RichardT, I may have made a mistake in keying the data, but here are my results showing no significant trend: ``` % Year [1] 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 [21] 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 [41] 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 [61] 2004 2005 2006``` ``` % Hc [1] 7 5 3 5 6 7 11 8 6 6 8 9 4 3 7 7 4 8 3 7 6 4 7 6 4 12 5 6 3 4 4 6 6 [34] 5 5 5 9 7 2 3 5 7 4 3 5 7 8 4 4 4 3 11 9 3 10 8 8 9 4 7 9 15 5 % cor.test(Year,Hc,method="pearson") Pearson's product-moment correlation data: Year and Hc t = 0.9513, df = 61, p-value = 0.3452 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval: -0.1307706 0.3579536 sample estimates: cor 0.1209120 % cor.test(Year,Hc,method="kendall") Kendall's rank correlation tau data: Year and Hc z = 0.3654, p-value = 0.7148 alternative hypothesis: true tau is not equal to 0 sample estimates: tau 0.03313820 % cor.test(Year,Hc,method="spearman") Spearman's rank correlation rho data: Year and Hc S = 38969.32, p-value = 0.6145 alternative hypothesis: true rho is not equal to 0 sample estimates: rho 0.06467643 ``` ```Warning message: Cannot compute exact p-values with ties in: cor.test.default(Year, Hc, method = "spearman") % ``` 39. Paul Linsay #36 & 37 What are the errors for the time series of hurricane counts? Suppose you could re-run 1945-2006 over and over like in Groundhog Day. The number of hurricanes observed each year would change from repetition to repetition. You could then take averages and get a good measurement of the mean and variance of the number of hurricanes in each year. But you can’t. So the best you can do is assume each year’s count is drawn from a Poisson process with mean and variance equal to the number of observed hurricanes. The same applies to the histogram. The error on the height of each bin is given by Poisson statistics too. If you fit a function to a series of counts these are the errors that should be used in the fit. This has been standard practice in nuclear physics and its descendents for close to 100 years. As an example here’s a link from a nuclear engineering department. Notice the statement that “Counts should always be reported as A +- a.” 40. James Erlandson Re 36: … where the plotted points are known without error, by convention (i.e., what I was taught, but it does seem to be accepted by the overwhelming majority of practitioners) one does not employ error bars. The plotted points are not known without error. Direct measurements of intensity in the form of wind and pressure observations are seldom available. The eye and area of maximum winds cover a very small area and are unlikely to affect a station directly, especially for a ship whose captain is intent on avoiding the opportunity to observe the most severe part of the tropical cyclone. Observations from anywhere within the circulation are helpful (see Section 2.4) but alone reveal little about intensity (Holland, 1981c; Weatherford and Gray, 1988). The area of destructive winds can be very concentrated, especially in the case of a rapidly developing tropical cyclone. The most common estimates of intensity are those inferred from satellite imagery using Dvorak analysis. It is also possible to monitor the upper-tropospheric warm anomaly directly using passive microwave observations from satellites. 
Thermodynamic sounding retrievals from the NOAA Microwave Sounding Unit (MSU) have been statistically related to central pressure reduction and maximum winds for tropical cyclones in the North Atlantic (Velden, 1989) and western North Pacific (Velden et al., 1991). The technique has not been used operationally but is expected to have errors similar to the Dvorak analysis. It also performs less effectively on rapidly developing tropical cyclones, but does have the advantage that each estimate is independent of earlier analyses. Thus the Velden technique does not suffer from an accumulation of errors that may occur with a Dvorak analysis. Observing Methods As has been discussed here before. 41. TAC #39 Paul, thanks for the link to the interesting document. You have to read it pretty carefully, and it deals with a slightly different problem (estimating the true mean, a parameter) but it does shed a bit of light on the topic. One particular thing to note: It specifies estimating sigma as the square root of the true mean, m, not the count, n. Thus, following the document’s prescription, all of the error bars in figure 1 should be exactly the same length, as implied by UC (#26) and also in #14. However, if you try to apply this to the hurricane dataset — for a small value of lambda where some of the observed counts are less than the standard deviation — you’ll find it doesn’t work very well (some of your observations will have error bars that go negative, for example). Does this clear anything up or just create more confusion? Let me try a different approach: In 2005, we agree there were n=15 hurricanes. We also agree that the expected count (assuming iid Poisson) was approximately m=lambda=6.1, and therefore the standard deviation of the expected count is approximately 2.5. So here’s the question: Is the n=15 a datapoint or an estimator for lambda? I expect your answer will be “both” — but I’m not sure. Anyway, I’d be interested in your response. 42. TAC #40 James, I concede the point. However, in the interest of resolving the issue about error bars, can we, just for the moment, pretend that things are measured without error? Thanks! 43. Steve McIntyre #41. A question – hurricanes are defined here as counts with a wind speed GT 65 knots. There’s nothing physical about 65 knots. Counts based on other cutoffs have different distributions – for example, cat4 hurricanes don’t have a Poisson distribution, but more like a negative exponential or some other tail distribution. Hurricanes are a subset of cyclones, which in turn are a subset of something else. If hurricanes have a Poisson distribution, can cyclones also have a Poisson distribution? I would have thought that if cyclones had a Poisson distribution, then hurricanes would have a tail distribution. Or if hurricanes have a Poisson distribution, then what distribution would cyclones have. Just wondering – there’s probably a very simple answer. 44. Dave Dardinger re: #43 …if hurricanes have a Poisson distribution, then what distribution would cyclones have. Just wondering – there’s probably a very simple answer. I think it’d be similar to a pass-fail class. You might set an arbitrary value for pass on each individual test during the semester and then you could then give a number of tests, some harder and some easier, randomly. You’d have a Poisson distribution, possibly. 
But you might also, on adding up the various test results for each individual in the class decide you want to set things for 85% of students passing and this could be set at whatever value gives you that ratio. This could be regarded as a physical result while the arbitrary value for passing an individual test would not be. As for cyclones, they’d be everything which passes the “cyclone” test. There’d be a finite number of such cyclones each season, so they should distribute just like hurricanes do. Just with larger numbers. I think someone here was saying that as the numbers get larger, the curve gets more like a normal distribution, which makes sense, so I’d expect the distribution of cyclones to be more normal than the distribution of hurricanes. This being the case, it would seen that the distribution of a tail is simply a poisson distribution with the pool of “passing” candidates starting where the tail was chopped off. 45. bender Re #42 Wait a sec, TAC. Don’t concede too much here. Measurement error and sampling error are different things. This issue is all about sampling error. [Linsay's #39 clarifies my ergodicity argument.] The hypothesis we want to test is whether the observed counts are likely to be drawn from a rondom poission process (or truncated possoin, whatever) with fixed mean = fixed variance. Each year is a random sample from a stochastic process. Alternative hypothesis: there is a trend in the mean. Paul Linsay has plotted the variance around each year’s observation as though that observation *were* that year’s mean. That’s wrong, and that’s why your complaint about the difference count-variance dropping below zero for low counts is valid. The reason it’s wrong is that the null hypothesis is that there is only one fixed mean. If any obs fall outside the interval, such as 2005, there’s a chance we’re wrong. If, further, the proportion of observations falling outside the 95% confidence interval increases with time, then there’s a trend and the mean is not fixed. All this assumes that the variance is constant with time. But there is no reason this must be true. If the process is nonstationary, it is not ergodic. Then inferences about trends starts to get dicey. That’s where the statistical approach breaks down and the physical models start to play a role. 46. McCall re: 44 cyclones … and by extension, tornados exhibit a poisson distribution? In an AGW theory of everything, shouldn’t f3-f5′s be increasing in frequency? Or is it because they do not tap directly into the catastrophic global increase of SSTs? 47. Willis Eschenbach SINUSOIDAL POISSON DISTRIBUTION Paul, a very interesting post. I disagree, however, when you say: (1)The annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process with a mean of 6.1 hurricanes per year. The trends and groupings seen in Figure 1 are due to random fluctuations and nothing more. When I looked at the distribution, it reminded me of a kind of distribution I have seen before in sea surface temperatures, which has two peaks instead of one. I’ve been investigating the properties of this kind of distribution, which seems to be a combination of a classical distribution (e.g. Poisson, Gaussian) with what (because I don’t know the name for it) I call a “sinusoidal distribution.” (This is one of the joys of not knowing a whole lot about a subject, that I can discover things. They’ve probably been discovered before … but I can come at them without preconception, and get the joy of discovery.) 
A sinusoidal distribution arises when there is a sinusoidal component in a stochastic process. The underlying sinusoidal distribution is the distribution of the y-values (equal to sin(x)) in a cycle. The distribution is given by d(arcsin(x))/dt. This is equal to 1/sqrt(1-x^2), where x varies from -1 to 1. The distribution looks like this: As you can see, the sine wave spends most of its time at the extremes, and very little time in the middle of the distribution. Note the “U” shape of the distribution. I looked at combinations of the sinusoidal with the poisson distribution. Here is a typical example: Next, here is the distribution of the detrended cyclone count, 1851-2006. Note the “U-shape” of the peak of the histogram. Also, note that the theoretical Poisson curve is to the left of the sides of the actual data. This is another sign that we are dealing with a sinusoidal Poisson distribution, and is visible in Figure 2 at the head of this thread. One of the curiosities of the sinusoidal distribution is that the width (between the peaks) is approximately equal to the amplitude of the sine wave. From the figure above, we can see that we are looking for an underlying sinusoidal component with an amplitude of ~ 4 cyclones peak to peak. A periodicity analysis of the detrended cyclone data indicates a strong peak at about 60 years. A fit for a sinusoidal wave shows the following: Clearly, before doing any kind of statistical analysis of the cyclone data, it is first necessary to remove the major sinusoidal signal. Once the main sinusoidal signal is removed, the reduced dataset looks like this: As you can see, the fit to a Poisson distribution is much better once we remove the underlying sinusoidal signal. CONCLUSION As Jean S. pointed out somewhere, before we set about any statistical analysis, it is crucial to first determine the underlying distribution. In this case (and perhaps in many others in climate science), the existence of an underlying sinusoidal cycle can distort the underlying distribution in a significant way. While it might be possible to determine the statistical features (mean, standard deviation, etc.) for a particular combined distribution, it seems simpler to me to remove the sinusoidal component before doing the analysis. My best to everyone, and special thanks to Steve M. for providing this statistical wonderland wherein we can discuss these matters. w. 48. IL TAC plus bender #45. I see we are arguing about 2 different things here, the first perhaps a little more to do with semantics, the second more substantial. In #36 TAC argued that if the counts for 2005 were 15 then (assuming perfect recording capability) that was an exact number and so should have no error bar. He (? sorry, shouldn’t make assumptions) in #36 says that if you put 15+/-3.6 then that implies that the count could have been 18 which is wrong. I disagree. I think those are confidence limits – estimators if you prefer. In a physics experiment if I measure something numerous times with a measurement error, what the error bar is telling me is what is the probability that if I measure it again that I will get within a certain range of that value. In a literal sense we can’t ‘run’ 2005 again but the graph as plotted by Paul Linsay is meaningful to me as the confidence limits for each count – if we were to have another year with the same physical conditions as 2005, what is the likelihood that we would get 15 storms again. To me the correct answer is 15+/-3.6 (1 sigma). 
It's like the example from the physics web page linked by Paul: if I measure the number of radioactive decays per minute and I record the decays per minute for an hour, then after the hour I have 60 measurements, each an exact number, but that does not say to me that there are no error bars on those numbers. If I have N counts in the first minute then the standard deviation (the expectation of what I might get in the second minute) is root N. If I was to plot all of those 60 measurements against time I would plot each count value with its own individual confidence limit of root N (N being the particular count in that particular minute). Yes, if I take all the counts for the whole 60 minutes I can get a mean value for the hour. The confidence limit on that mean value will be the root of all the counts in the hour (approximately 60N, and the standard deviation will be root 60N). But that is not the confidence limit on an individual measurement. I could plot that mean value with its confidence limit root 60N but I would then only have a single point on the graph. Paul says he comes from a physics background, so do I, and maybe this is where we are differing with TAC, Bender etc (maybe, again, don't want to make assumptions). In #45 Bender says that it is wrong to plot each point with the variance given by the mean of that point. Sorry Bender, I disagree with you, I believe it is correct. I am looking at it from the perspective of the radioactive decay counting experiment described above. You can reduce the uncertainty by summing together different years (although your uncertainty only decreases by the root of the number of years that you sum) but then you have reduced the number of data points that you have! If you have the 63 years' worth of individual year measurements then each individual one has a standard deviation which should be plotted; they are not all the same. If you want to address hypotheses such as 'are the counts changing with time' then you have to address the uncertainty in each data point if you retain all 63 data points and look for a trend that is significant above that noise level. Yes, you can reduce the uncertainty by summing years but then you have a lot fewer data points on your graph. No, the variance count does NOT drop below zero because this is Poisson statistics; it is asymmetrical and doesn't go below zero. 49. Posted Jan 8, 2007 at 1:55 AM | Permalink | Reply Let's see if I clarify or confuse: What are the errors for the time series of hurricane counts? Suppose you could re-run 1945-2006 over and over like in Groundhog Day. The number of hurricanes observed each year would change from repetition to repetition. You could then take averages and get a good measurement of the mean and variance of the number of hurricanes in each year. But you can't. So the best you can do is assume each year's count is drawn from a Poisson process with mean and variance equal to the number of observed hurricanes. Yes, we are testing H0: the data are samples from a Poisson(lambda) process. We don't know lambda. We need to estimate it using the data. We want to find a function of the observations (a statistic) that tells us something about this unknown lambda. Now, assuming H0 is true, we have quite a lot of theory behind us telling us that the average of the observations (say $\hat{x}$) is the best estimate of lambda (MVB, for example). We also know that the variance of this estimator is $\hat{x}/n$, where n is the number of independent observations.
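A quick R check of that claim, using the Poisson(6) and n = 60 values discussed in this comment (a sketch only, not an analysis of the actual record):

```
# Sampling distribution of the estimator: the mean of n = 60 draws from Poisson(6).
set.seed(2)
xbar <- replicate(10000, mean(rpois(60, 6)))
var(xbar)                    # close to lambda/n = 6/60 = 0.1
mean(xbar < 5 | xbar > 7)    # very small: the average rarely strays far from 6
```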
You cannot find an unbiased estimator that has smaller variance than this. An estimate obtained this way will be distributed more closely around the true lambda than any other estimator. And the number of observations in this case is 60 (?), so we can see that the sampling variance is small (I think I could say 'sampling error is very small' in this context as well). You can try this with simulated Poisson processes: take 60 samples of Poisson(6), take the average, and see how often it is less than 5 or more than 7. If you take only one sample, the sampling error is much larger. This is actually shown in Figure 1 (IMO). Completely another business is the testing of H0. Knowing that lambda is close to 6, we can compute [0.01 0.05 0.95 0.99] quantiles from Poisson(6) – if, for example, the 0.99 point is exceeded 10 times in the data, we can suspect that our distribution model is not OK. In addition, there should be no serial correlation in the data if H0 is true. And, I didn't mention measurement errors in the above. Measurement errors affect both the estimate of lambda and the testing of H0. And finally, if the Figure 1 bars represent sampling error, there is basically nothing wrong with it. The sampling variance is $\hat{x}/n$, n=1. But I think that approach doesn't bring up a very powerful test for H0. 50. bender Re #47 If sinusoidal, why not "multistable"? You see the problem? Detection & attribution. (What is the frequency of your sinusoid? Is there more than one frequency? Or maybe the process is "multistable"? What are the attracting states?) Problem specification is a mess. Stationary random Poisson is just a starting point. The biggest reason arguing against Linsay's Poisson is the high bias in the 1-2 counts and the low bias in 3-4 counts. Systematic bias usually implies error in model specification. Eschenbach's cycle-removed Poisson, interestingly, removes this systematic bias. 51. bender Re #48 In #45 Bender says that it is wrong to plot each point with the variance given by the mean of that point. Sorry Bender, I disagree with you, Unfortunately, I am correct (in terms of the inferences that are being attempted with these data, which assume that you are observing a single stochastic process). If you suppose that the mean fluctuates from year to year (hence plotting variance around each observation, not the series mean), then you cannot suppose that the variance is fixed. Either you are observing one stochastic process (one mean, one variance), or you are observing more than one (multiple means, multiple variances) which has been stitched together to appear as one. Why would you ever plot a variance around an observation (as opposed to a mean)? What would that tell you? 52. bender Re #49 Let's see if I clarify or confuse You clarify, at least up until the very end: if the Figure 1 bars represent sampling error, there is basically nothing wrong with it. What's right about plotting error around observations? Error is to be plotted on means; individual observations fall inside or outside those error limits. 53. bender Re #48 In #36 TAC argued that if the counts for 2005 were 15 then (assuming perfect recording capability) that was an exact number and so should have no error bar. He (? sorry, shouldn't make assumptions) in #36 says that if you put 15+/-3.6 then that implies that the count could have been 18 which is wrong. It's not 15 +/- 3.6 that should be plotted, it's 6.1 +/- 3.6. If the series is stationary then the obs. 15 should be compared to that. In which case 15 is extreme.
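For reference, the comparison described here can be computed directly (a sketch; the 6.1 is the series mean quoted in the thread):

```
# How extreme is an observation of 15 under a stationary Poisson(6.1)?
1 - ppois(14, lambda = 6.1)   # P(X >= 15), on the order of 1e-3
qpois(c(0.025, 0.975), 6.1)   # the 2.5% and 97.5% points of Poisson(6.1)
```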
Eschenbach shows that the mean and variance might not be stationary but sinusoidal. Now the obs 15 must be compared to the potential range calculated during 2005 to determine if it's extreme. In which case 15 is still extreme. 54. Posted Jan 8, 2007 at 2:34 AM | Permalink | Reply What's right about plotting error around observations? Error is to be plotted on means; individual observations fall inside or outside those error limits. Around the observation. The mean of one sample is the sample itself. Observe only one sample from a Poisson with unknown lambda. The best estimate of lambda is the observation itself. The sampling variance is the observation itself. 55. richardT #38 The Pearson correlation coefficient you are using to find trends assumes that the data have a Gaussian distribution. Given the discussion of the Poisson nature of the data on this page, this choice needs justifying. A more appropriate test for a linear trend when the data have Poisson variance is to use a generalised linear model with a Poisson error distribution. I've done this, and you are correct, there is no linear trend. The absence of a linear trend does not imply that the mean is constant – there may be a more complex relationship with time. This might be sinusoidal (#47) but an alternative exploratory model is a generalized additive model. A GAM finds significant changes in the mean. (This test is only appropriate) Everybody (except #47) seems to be happy with the statement that the "annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process" without any goodness of fit test. Are they? Doesn't anyone want to test this assertion? `curry` 56. TAC #55 RichardT The Pearson correlation coefficient you are using to find trends assumes that the data have a Gaussian distribution. Given the discussion of the Poisson nature of the data on this page, this choice needs justifying. This point is well taken. The Pearson version, as you correctly note, requires an assumption about normality. Given that we are looking at Poisson (?) data, there is reason to wonder about the robustness of the test. In #38 I also provided results from two nonparametric tests: Kendall's tau and Spearman's rho. They do not require a distributional assumption. Also, because they are relatively powerful tests even when errors are normal, they are attractive alternatives to Pearson. However, while these tests are robust against mis-specification of the distribution, they are not robust against, for example, "red noise". SteveM has written a lot on this topic; search CA for "red noise" and "trend". Among other things, "red noise" can lead to a very high type 1 error rate — you find too many trends in trend-free stochastic processes. Note that in this case all three tests, as well as an eyeball assessment, find no evidence of trend (the p-values for the 3 tests are .35, .71, and .61, respectively (#38)). I think we can safely conclude that whatever trend there is in the underlying process is small compared to the natural variability. 57. richardT #56 These non-parametric tests, which will cope with the Poisson variance, can still only detect linear trends. If the relationship between hurricane counts and year is not linear, these tests are not suitable, and there will be a high Type-2 error. Consider this code:

```
x = -10:10
y = x^2 + rnorm(length(x), 0, 1)
cor.test(x, y)
```

Even though there is an obvious relationship between x and y, the linear correlation test fails to find it. You correctly state that these non-parametric tests are not robust against red noise. If the hurricane counts are autocorrelated, then they are not from an iid Poisson process, and the claim that the mean is constant is incorrect.
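A sketch of the kind of fits richardT describes in #55 and #57 might look like the following in R; the counts here are simulated stand-ins, not the hurricane record:

```
# Poisson GLM for a linear trend, and a GAM (mgcv) for a smoother, possibly nonlinear trend;
# 'counts' is simulated placeholder data, not the HURDAT series.
library(mgcv)
year   <- 1945:2006
counts <- rpois(length(year), 6.1)
summary(glm(counts ~ year, family = poisson))$coefficients["year", ]  # linear trend test
summary(gam(counts ~ s(year), family = poisson))                      # smooth trend in the mean
```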
58. richardT The code:

```
x = -10:10
y = x^2 + rnorm(length(x), 0, 1)
cor.test(x, y)   # p = 0.959 for my simulation
```

59. TAC First, I want to join Willis (#47) in offering "special thanks to Steve M. for providing this statistical wonderland wherein we can discuss these matters." CA really is an amazing place. I would also call attention to Willis's elegant and expressive graphics. Graphics are not a trivial thing; they are a critical component of statistical analyses (for those who disagree, Edward Tufte (author of "The Visual Display of Quantitative Information," one of the most beautiful books ever written) presents excellent counter-arguments, including an utterly convincing analysis of the 1986 Challenger disaster — it resulted from bad graphics!). WRT #44 – #56, there have been a lot of comments, almost all of them interesting. I think bender has it right, but the arguments on both sides deserve careful consideration. The reason for the disagreement, as I see it, has to do with ambiguity about rules for graphical presentation, specifically what is "conventional," and some confusion about what we are trying to represent with the figure. There is also a subtlety here that I am not sure I fully understand myself, but here goes: When you plot parameter estimates, error bars mean one thing (some sort of measure of the distance between the estimated value and the true parameter); when you plot data, they mean another (corresponding to the distance between the observed value and that single realization of the process). These two types of uncertainty are recognized, respectively, as epistemic and aleatory uncertainty (aka parameter uncertainty and natural variability). That's why I asked the question in #41, "Is the n=15 a datapoint or an estimator for lambda?", which IL answered in #48: "I think those are confidence limits – estimators if you prefer." When looking at a graphic, the clue about which type of uncertainty is presented — i.e. what the error bars refer to — is whether or not we are looking at data or estimates. When estimates are based on the mean of samples of size N=1, as is the case here, there is an obvious problem: The viewer may assume that the plotted points are data. However, you could argue, as some have, that these are not data but estimates based on samples of size N=1. (IMHO, it makes no sense to estimate from samples of size N=1 when you have 63 observations available; but that's another discussion). Unless care is taken in how the graphic is constructed, the ambiguity is bound to cause confusion. 60. TAC Oops! Please note that in the last paragraph of #59, the "aleatory/natural variability" is incorrectly defined. 61. TAC #59 Now that I look at it, the whole section containing the word "aleatory" is a mess. Best just to ignore it. 62. IL #51 Bender, sorry to prolong an argument, but Unfortunately, I am correct (in terms of the inferences that are being attempted with these data, which assume that you are observing a single stochastic process). If you suppose that the mean fluctuates from year to year (hence plotting variance around each observation, not the series mean), then you cannot suppose that the variance is fixed. Either you are observing one stochastic process (one mean, one variance), or you are observing more than one (multiple means, multiple variances) which has been stitched together to appear as one.
Why would you ever plot a variance around an observation (as opposed to a mean)? What would that tell you? No, I don't think you are, and I'm not sure you have understood what I was arguing, based additionally on your response in #53. This is not a case where we have some large population with some defined population mean and standard deviation, and by repeatedly sampling that population we can determine the mean and standard deviation of the sample and from that characterise the whole data set – which is what you seem to be arguing when you think that each data point is representative of that mean and should have that same variance. When you have a few random, independent events, as in radioactive decay and, I submit, in the storm counts per year, you do not have the situation that you describe. Have you ever done an experiment where you have small counting statistics, like a radioactive decay counting experiment for example? The process is described well in that link that Paul Linsay gave, and it additionally describes the standard deviation on each individual data point as root N. http://nucleus.wpi.edu/Reactor/Labs/R-stat.html Exactly the same situation applies to photons recorded by a photomultiplier, but also to more commonplace situations such as calls to a call centre, admissions to a hospital etc. Each summed count within a time interval gives a number – the mean, therefore, for that time interval; if the count is N in that time interval, the variance is N and the standard deviation is root N. Each individual observation – each time period in the radioactive counting experiment or each year in the counting storms 'experiment' – is an individual number subject to counting statistics. Where you have small probabilities of things happening but a large number of trials, so that probability x trials = a significant number, you are subject to counting statistics. Then each data point (each year in the storms' case that started everything off) has its own variance based on the number of storms counted in that year. Here I am making no assumptions about time stationarity or anything else; I am just looking at a sequence of small numbers generated by a process subject to counting statistics. This whole debate started because Paul Linsay put counting-statistics confidence limits on the data points in Figure 1. These are different for each point and I agree with him: you do not have the same confidence limit on each data point because each one is an independent measurement. If you want to compare them – to look for anomalous values or correlations with time or anything else – then we must look at the confidence limits for a particular year if we wish to test whether a particular year is anomalous, or test for changes in the mean if we wish to test whether there is some correlation with time. The mean value for all of the 63 years is 6.1 – suppose a particular year records, for example, 15. You might want to know the probability that that year is anomalous, and you would look at the standard deviation of that year, which is root 15, and compare it with other years. If you want to compare with neighbouring single years then you would be comparing with the mean and standard deviation of each of those individual years; if compared with the remaining 62 years, then you would sum up all 62 years and derive a mean value and a standard deviation which is the root of all the storms in the 62 years. 63. TAC #62 IL: Please take another look at the article that Paul provided and that you cite.
It does not say: the standard deviation on each individual data point as root N. Rather — and this is important — it specifies root M, where M is the true process mean (i.e. the expected value of N), which is not the observed value N. 64. Paul Linsay Since this has become a forum on measurement error, let's continue. It's an interesting topic all by itself. When you make a measurement there are two sources of error. One due to your instrument and the second due to natural fluctuations in the variable that you are measuring. The total error is the quadrature sum of these two. As an example, consider measuring a current, I. It is subject to a natural fluctuation known as shot noise with a variance proportional to I. I can build a current meter that has an intrinsic error well below shot noise so that the error in any measurement is entirely due to shot noise. Now I take one measurement of the current. Its value happens to be I_1, hence the assigned error is +-sqrt(I_1) at the one sigma level. The experimental parameters change and a measurement of the current gives I_2, this time with an assigned error of +- sqrt(I_2). And so on. I never take more than one measurement in each situation but no one would argue with my assignment of measurement error. [Maybe bender would, I don't know!] Now translate this to the case of the hurricanes. Instrumental error is zero. I can count the number this year perfectly, it's N. If hurricanes are due to a Poisson process the count has an intrinsic variance of N. Hence the assigned hurricane count error is sqrt(N), exactly what I did in Figure 1. #47, Willis The bin heights of the histograms are subject to Poisson statistics too. Hence the errors are +-sqrt(bin height). You have to show that the fluctuations in the distributions are significantly outside these errors to warrant the sine wave. To paraphrase the old joke about the earth being supported by a turtle: It's Poisson statistics all the way down. 65. IL #63. Maybe that wasn't the best link to work with since it discusses Gaussian profiles and says that Poisson statistics are too difficult to work with! It's only the true process mean M when you have a large number of counts, so that a Gaussian is an appropriate statistic to use. It makes the point lower down that when you have a single count, then the count becomes the estimate of the mean. I therefore go back to my point that the appropriate confidence limit on a single year's storm counts is root N where N is the number of counts in that year. If I had thousands of years' worth of data, on your argument I would take the standard deviation of the total number of storms, which would number (say) 10,000, for which the standard deviation would be 100. Are you going to argue that the appropriate error on each individual year is +/-100, or a fractional error of 100/10000 = 0.01 (times the mean of 6.1 = confidence limit of 0.061 on each individual data point)? The former is clearly nonsense and the latter is wrong because each year does not 'know' that there are thousands of years' worth of data. It's like tossing coins, I can toss a coin thousands of times and get a very precise mean value with precisely determined standard deviation but if I toss a coin again, that is not appropriate for working out the probability of what is going to happen next time since the coin only 'knows' what the probability is of it coming up a particular result for the next throw!
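IL's 10,000-storm hypothetical can be put into numbers directly (a sketch; only the 6.1 storms per year and the counts 10,000 and 15 come from the thread):

```
# IL's hypothetical: ~10,000 storms accumulated at roughly 6.1 per year.
total   <- 10000
n_years <- total / 6.1           # about 1640 years of record
sqrt(total)                      # ~100: spread of the total count
sqrt(total) / n_years            # ~0.06: uncertainty of the long-run mean per year
sqrt(15)                         # ~3.9: Poisson spread attached to a single 15-storm year
```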
Ditto if I want to look at an individual year in that sequence, I have to look at the count I have got for that year. Please note, in that example of counting for thousands of years and getting 10,000 storms, I would be confident that I could determine the mean over all those years with a confidence limit of 1%, but the uncertainty in time on that mean value would then span that whole period of thousands of years. If I want to see if there are long term trends in that data I can combine 100 years at a time to reduce the fractional error in each of the mean values for each of those centuries and compare the mean value for each of those centuries with its standard deviation to test whether there is significant change with time. But then you would plot a single mean value for each century with the confidence limit on the time axis of one century. 66. Steve McIntyre #47. Willis, this is rather fun. Off the top of my head, your arc sine graphic reminded me of two situations. First, in Yule's paper on spurious correlation, he has a graphic that looks like your sinusoidal graphic. Second, arc sine distributions occur in an extremely important random walk theorem (Feller). The amount of time that a random walk spends on one side of 0 follows an arc sine distribution. When I googled "arc sine feller", I turned up a climateaudit discussion here that I'd forgotten about: http://www.climateaudit.org/?p=310 . So there might be some way of getting to an arc sine contribution without underlying cyclicity (which I am extremely reluctant to hypothesize in these matter.) 67. Posted Jan 8, 2007 at 10:05 AM | Permalink | Reply 49, 54 Correction: lambda/n is the variance of the estimator; we don't know lambda, but using $\hat{x}/n$ wouldn't be too dangerous I guess (?). Compare to the normal distribution case (estimate the mean, known variance $\sigma^2$): $\hat{x}$ is the MVB estimator of the mean, with variance $\sigma^2/n$. 68. Tim Ball I find Willis' point (#47) about eliminating the sinusoidal wave interesting from a statistical perspective, but it also speaks to the underlying climate issues. What is the cause and climatic significance of the sinusoidal pattern Willis eliminated? You have a record of 63 years, which in climate terms is virtually nothing. I have long argued that the use of a 30 year 'normal' as a statistical requirement is inappropriate for climate studies and weather forecasting. Current forecasting techniques assume the pattern within the 'official' record is representative of the entire record over hundreds of years and holds for any period, when this is not the case. It is not even the case when you extend the record out beyond 100 years. The input variables and their relative strengths vary over time, so those of influence in one thirty year period are unlikely to be those of another thirty year period. Climate patterns are made up of a vast array of cyclical inputs, from cosmic radiation to geothermal pulses from the magma underlying the crust. In between is the sun as the main source of energy input, with many other cycles from the Milankovitch of 100,000 years to the 11 year (9-13 year variability) Hale sunspot cycle and those within the electromagnetic radiation. We could also include the sun's orbit around the Milky Way and the 250 million year cycle associated with the transit through arms of galactic dust.
My point is the 63 year record is a composite of so many cycles both known and unknown that to sort them out in even a cursory way is virtually impossible with current knowledge. Is the 63 year period part of a larger upward or downward cycle, which in turn is part of an even larger upward or downward cycle? Now throw in singular events such as phreatic volcanic eruptions, which can measurably affect global temperatures for up to 10 years and you have a detection of overlappping causes problem of monumental proportions. 69. Posted Jan 8, 2007 at 10:35 AM | Permalink | Reply Interesting discussion. #55 Everybody (except #47) seems to be happy with the statement that the “annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process” without any goodness of fit test. Are they? Doesn’t anyone want to test this assertion? Not 100 % compatible, that would be suspicious. And I think if I estimate lambda from observations, and then observe that 0.01 and 0.99 quantiles are exceeded only once with n=60, I think I have made kind of goodness of fit test. I don’t claim that it is optimal test, but at least I did it #62 This is not a case where we have some large population where there is some defined population mean and standard deviation and by repeatedly sampling that population we can determine the mean and standard deviation of the sample and from that determine the whole data set which is what you seem to be arguing when you think that each data point is representative of that mean and should have that same variance. Having trouble understanding what you are saying (sorry). In your link it is said that In practice we often have the opportunity to take only one count of a sample. IMO this is not the case here. TAC seems to agree with me. #64 When you make a measurement there are two sources of error. One due to your instrument and the second due to natural fluctuations in the variable that you are measuring. The total error is the quadrature sum of these two. Makes no sense to me. 70. Ken Fritsch Re: #56 Note that in this case all three tests, as well as an eyeball assessment, find no evidence of trend (the p-values for the 3 tests are .35, .71, and .61, respectively (#38)). I think we can safely conclude that whatever trend there is in the underlying process is small compared to the natural variability. I have to continue going back to this statement and others like it to keep, what I view as the critical result coming out of this discussion, firmly in mind. To a layman with my statistical background I find the discussion about the Poisson distribution (and beyond) interesting and informative, but I also am inclined to view it as cutting the analysis of the data a bit too fine at this point. I would guess that a chi square goodness of fit test or a kurtosis/skewness test for normality would not eliminate a Poisson and/or a normal distribution as applying here (without the sinusoidal correction). Intuitively, if one considers the TC event as occurring more or less randomly and based on the chance confluence of physical factors, the Poisson probability makes sense to me. I agree with the Bender view on applicability of statistics and errors (but not necessarily extended to valuations of young NFL QBs) and his demands for error display bars. I have heard the stochastic mingling with physical processes arguments before but I keep going back to: stochastic processes arise from the study of fluctuations in physical systems. 
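For anyone who wants to try the goodness-of-fit check mentioned in #55 and just above, a rough R sketch (the counts are simulated placeholders, and strictly one more degree of freedom should be removed because lambda is estimated from the same data):

```
# Rough chi-square goodness-of-fit check of annual counts against a fitted Poisson.
set.seed(3)
counts <- rpois(62, 6.1)                       # placeholder for the 62 annual counts
lambda_hat <- mean(counts)
breaks <- c(-0.5, 3.5, 5.5, 7.5, Inf)          # pooled bins: 0-3, 4-5, 6-7, 8+
obs <- table(cut(counts, breaks))
p <- diff(ppois(c(-1, 3, 5, 7, 1000), lambda_hat))
chisq.test(obs, p = p)
```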
Standard statistical distributions can be helpful in understanding and working with real life events but I am also aware of those fat tails that apply to real life (and maybe the 2005 TC NATL storm season). 71. jae 68, Tim Ball: great post! 72. Steve Sadlov Steve Sadlov’s 2007 prediction : 6.1 +/- 2.449489743 — LOL! 73. Steve Sadlov Sorry I meant 6.1 +/- 2.469817807 74. Count Iblis It would be more interesting to find a 95% confidence interval for any hypothetical trend that can be included/hidden in the noisy data. Do climate models make predictions that are outside this confidence interval? 75. Bob K Paul, I see John Brignell gave your post a mention at his site. A little over half way down the page. 76. Willis Eschenbach Steve M, you say: So there might be some way of getting to an arc sine contribution without underlying cyclicity (which I am extremely reluctant to hypothesize in these matter.) I agree whole heartedly. I hate to do it because it assumes facts not in evidence. I’ll take a look at your citation. Basically, what happens is that lambda varies with time. It probably is possible to remove the effects of that without assuming an underlying cycle. Exactly how to do that … unknown. w. 77. TAC #65 IL: Thank you for your thoughtful comments. Believe me: I understand your argument. I am familiar with the statistics of radioactive decay, and I know something about how physicists graph count data. The error bars corresponding to that problem — you describe it well — are designed to serve a specific purpose: To communicate what we know about the parameter lambda. The “root N” error bars (though not optimal (see below)), are often used in this situation, and they are likely OK so long as the product of the arrival rate and the time interval is reasonably large. I have no argument on these points. So what’s the issue? Well, we’re not dealing with radioactive decay, or with any of the other examples you cite. We’re dealing with statistical time series, and, IMHO, the relevant conventions for plotting such data come from the field of time series analysis, not radioactive decay. Specifically, when you plot a time series with error bars, the error bars are interpreted to indicate uncertainty in the plotted values. That’s what people expect. At least that’s what my cultural background leads me to believe. [This discussion has a peculiar post-modern feel. Perhaps a sociologist of science can step in and explain what's going on here?]. Anyway, here are some responses to other comments: I therefore go back to my point that the appropriate confidence limit on a single year’s storm counts is root N where N is the number of counts in that year. This is approximately correct if the only sample of the population that you have is the N observations and you are concerned with estimating the uncertainty in the arrival rate. If you want a confidence interval for the observed number of arrivals, however, the answer is [N,N]. (Incidentally, the root N formula is actually not a very good estimator of the standard error. For one thing, if you happen to get zero arrivals, you would conclude that the arrival rate was zero with no uncertainty). If I had thousands of year’s worth of data, on your argument I would take the standard deviation of the total number of storms which would number (say) 10,000 which the standard deviation would be a 100. 
Are you going to argue that the appropriate error on each individual year is +/-100, or a fractional error of 100/10000 = 0.01 (times the mean of 6.1 = confidence limit of 0.061 on each individual data point)? That's a good point. Under the null hypothesis we have one population (of 63 iid Poisson variates). To estimate lambda, just add up all the events and divide by 63. Then I would plot the data — the 63 observations, no error bars — and, as bender suggests, perhaps overlay the figure with horizontal lines indicating the estimated mean of lambda (imagine a black line), an estimated confidence interval for lambda (blue dashes), and maybe some estimated population quantiles (red dots). However, I would not attach error bars to the fixed observation. You know: The observation is fixed, right? However, the overlay would describe the uncertainty in lambda as well as the estimated population quantiles. It's like tossing coins, I can toss a coin thousands of times and get a very precise mean value with precisely determined standard deviation but if I toss a coin again, that is not appropriate for working out the probability of what is going to happen next time since the coin only 'knows' what the probability is of it coming up a particular result for the next throw! OK. So how would you plot error bars for the time series of coin tosses? Note: Coin tosses can be modelled as a Bernoulli rv, whose variance is given by N*p*(1-p); since N=1, $\hat{p}$ is always equal to either zero or one, and your error bars have length zero… 78. Louis Hissink If it's random then that means we have no clue at all what causes it. Neatly done Paul. 79. IL #77 TAC. Thanks for your comments; particularly the first paragraph seems to indicate that maybe we are not as far apart as I thought. Maybe this is a difference between different areas of science and we are arguing about presentation rather than substance but, to me, Paul's Figure 1 is correct and meaningful. What you say Then I would plot the data — the 63 observations, no error bars — and, as bender suggests, perhaps overlay the figure with horizontal lines indicating the estimated mean of lambda (imagine a black line), an estimated confidence interval for lambda (blue dashes), and maybe some estimated population quantiles (red dots). However, I would not attach error bars to the fixed observation. You know: The observation is fixed, right? However, the overlay would describe the uncertainty in lambda as well as the estimated population quantiles. doesn't make sense to me, because if you have assumed a null hypothesis of no time variation and have summed all the 63 years' worth of data then we only have one data point, the mean of the whole ensemble, with smaller uncertainty on that ensemble mean but spanning the 63 years. What you suggest is having your cake and eating it by taking the data from the mean and applying that to individual data points. I can see what you are getting at but in all the fields I have worked in (physics related) what you suggest would be thrown out as misleading. I would never see a data point with no error bar because I always see predictors, and even if it's a perfect observation of a discrete number of storms, to present that as a perfect number to me lies about the underlying physics. Perhaps ultimately as long as there is a good description of what is going on and we calculate confidence limits, significance of anomalous readings and trends correctly, then maybe it doesn't matter too much.
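For concreteness, the overlay TAC describes in #77 might be sketched in R as follows (simulated placeholder counts and an approximate normal confidence interval for lambda; not the actual record):

```
# Sketch of the #77 overlay: raw counts with lines for the estimated mean,
# a confidence interval for lambda, and Poisson population quantiles.
year   <- 1945:2006
counts <- rpois(length(year), 6.1)                        # placeholder data
lambda_hat <- mean(counts)
ci <- lambda_hat + c(-1, 1) * 1.96 * sqrt(lambda_hat / length(counts))
plot(year, counts, pch = 16, ylab = "hurricane count")
abline(h = lambda_hat)                                    # estimated mean (black line)
abline(h = ci, col = "blue", lty = 2)                     # approx. 95% CI for lambda (blue dashes)
abline(h = qpois(c(0.025, 0.975), lambda_hat), col = "red", lty = 3)  # population quantiles (red)
```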
I still think though that the basic problem here between us is when you say So what's the issue? Well, we're not dealing with radioactive decay, or with any of the other examples you cite. We're dealing with statistical time series, and, IMHO, the relevant conventions for plotting such data come from the field of time series analysis, not radioactive decay. Specifically, when you plot a time series with error bars, the error bars are interpreted to indicate uncertainty in the plotted values. That's what people expect. At least that's what my cultural background leads me to believe. No, physically this is exactly like radioactive decay or finding the pennies that I described or admissions to a hospital or any of those similar situations where counting statistics applies – the underlying physics is the same where we have a very small probability of a storm arising in a particular time or place but over a year there are a few. It is not then a statistical time series where I am sampling from a larger population with some underlying mean and variance. I guess, as I say, as long as we correctly calculate the significance of time variations etc and it's well explained what is done or displayed, then this discussion has probably gone about as far as it can. My final 2p on all of this. To me Paul's Figure 1 conveys correctly the uncertainties inherent in the physics, what you and Bender suggest to me with my background is misleading, and what Judith Curry presented (way back in the other thread that started all of this, with the 11 year moving average) is wrong and dangerously misleading. 80. Posted Jan 9, 2007 at 2:05 AM | Permalink | Reply Paul: 1) What if the count for some year is zero (TAC's point in 77)? 2) How would you draw those bars if you assume a Gaussian distribution instead of Poisson? When you make a measurement there are two sources of error. One due to your instrument and the second due to natural fluctuations in the variable that you are measuring. The total error is the quadrature sum of these two. I think I understand now (pl. correct if I'm wrong!). You observe y(t), y(t)=x(t)+n(t), where n(t) is error due to the instrument. x(t) is a stochastic process. x(t) varies over time, and you are not very interested in x(t) per se; you want to get a more general estimate: what is x(t+T), x(t-T), etc. If the process is stationary, it has an expected value. Your second error is E(x)-x(t), am I right? If so, 1) I think that 'error' is a misleading term, 2) 'natural fluctuations' without explanation opens the gate for 9-year averages and Ritson's coefficients. If you define it as a stochastic process, you'll have many tools that are not ad hoc (the Kalman filter, for example) to deal with the problem. Often ad hoc methods are as effective as carefully defined statistical procedures, but the difference is that the latter give fewer degrees of freedom to the researcher. If you have 2^16 options to manipulate your data, you'll get any result you want from any data set. Popper wouldn't like that. 81. TAC #79 IL: I agree that our difference has to do almost entirely with form, not substance, and even there I agree we're not far apart. When you say: No, physically [statistically?] this is exactly like radioactive decay or finding the pennies that I described or admissions to a hospital or any of those similar situations where counting statistics applies – the underlying physics [statistics?] is the same where we have a very small probability of a storm arising in a particular time or place but over a year there are a few.
My only quibble would be that the physics are different; its the stats that are the same; and the “cultural context” — the graphical conventions employed by the target audience — differ. So, the remaining issue: How to communicate the message, which we agree on, as unambiguously as possible to the community we want to reach. As I understand it, you are comfortable with — prefer — error bars attached to original data; I worry that such error bars introduce ambiguity to the figure (I also question their statistical interpretation, but that’s a secondary issue). I prefer an overlay or separate graphics. Of course, we do not have to resolve this. But, having now debated this thorny issue for half a week, perhaps this could be a real contribution to the literature. Consistency and rigor in graphics is important — perhaps as important as consistency and rigor in statistics, though less appreciated. Perhaps we could come up with a whole new graphical method for plotting Poisson time series — get Willis involved to ensure the aesthetics, and other CA regulars who wanted to get involved could contribute — and share it with the world . I say we name it after SteveM! Time to get some coffee… 82. TAC #79 IL: One final point: doesn’t make sense to me because if you have assumed a null hypothesis of no time variation and have summed all the 63 years’ worth of data then we only have one data point, the mean of the whole ensemble, with smaller uncertainty on that ensemble mean but spanning the 63 years. What you suggest is having your cake and eating it by taking the data from the mean and applying that to individual data points. I don’t know if I should admit this, but, in the sense you describe, statisticians do “have their cake and eat it too” — it is standard practice in time series analysis. For example, one often begins a data analysis by testing the distribution of errors assuming the sample is iid — before settling on a time series model. Then one looks at possible time-series models, rechecking the distribution of model errors based on the hypothesized model, etc., etc. It’s called model building. Perhaps it is indiscrete of me to mention this… It does raise a question: How would you develop error bars for a non-trivial ARMA — let’s start with an AR(1) — time series with Poisson errors? 83. IL Don’t know about a coffee TAC, perhaps we could have a beer or two…. I’m not really trying to get in the last word, but I think the physics IS the same. OK, in a literal sense, radioactivity is due to quantum fluctuations and tunnelling and hurricanes are a macroscopic physical process but what is fundamental to the problem and why I think that what you and Bender suggest is inappropriate is that there is a very small probability of hurricanes arising in any given area at any given time, its only when we integrate over a large area – ocean basin and long time (year) that we find up to several hurricanes. Each – on the treatment above – is a random, independent event caused by a low probability process which is why the statistics and the underlying physics of that statistics are the same as these other areas of physics. Getting climate scientists like Judith Curry to discuss that inherent uncertainty in years’ counts would be really interesting. 
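One way to simulate the kind of series TAC asks about in #82 (counts whose intensity carries over from year to year) is sketched below; the phi and sigma values are invented purely for illustration:

```
# Poisson counts driven by a latent AR(1) log-intensity: one possible version of the
# "AR(1) time series with Poisson errors" question in #82 (all parameters arbitrary).
set.seed(4)
n <- 62
phi <- 0.6; sigma <- 0.15
log_lambda <- numeric(n)
log_lambda[1] <- log(6.1)
for (t in 2:n) {
  log_lambda[t] <- (1 - phi) * log(6.1) + phi * log_lambda[t - 1] + rnorm(1, 0, sigma)
}
counts <- rpois(n, exp(log_lambda))
acf(counts)   # the counts inherit serial correlation from the latent intensity
```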
Having said that, and this is where it could get really interesting, as Margo pointed out some time back, there is a possibility that hurricane formation is not independent, that the more hurricanes there are in a year, the more predisposed the system is to form more through understandable physical mechanisms. That would take us to a new level of interest, but since I see conclusions on increasing hurricane intensity based on 11 year moving averages with no apparent discussion of the inherent uncertainties in a probability system like this, I think there is a long way to go before we can tackle such questions. 84. Paul Linsay #80, UC (1) sqrt(0) = 0, no error bars, just a data point (2) once N is large enough, about 10 to 20, the difference between Poisson and Gaussian becomes small. The Gaussian has mean N and variance N. The error bars would still be +- sqrt(N) at one sigma. Ritson used to be an experimental particle physicist, my training and career for a while too, so I'd expect that he would understand the way I plotted the data and error bars. 85. bender The problem with climate time-series data like these hurricane data is that you have one instance, one realization, one sample, drawn from a large ensemble of possible realizations of a stochastic process. You want to make inferences about the ensemble (i.e. all those series that could be produced by the terawatt heat engine), but based on a single stochastic realization. Any climate scientist who does not understand this – and its statistical implications – should have their degree(s) revoked. In contrast, physical time-series data that are generated by a highly deterministic process do not face the same statistical challenge. Often the physical process is so deterministic that you never stopped to think about the existence of an ensemble. Why would you? 86. Francois Ouellette Hey people, off topic I know, but seeing what good work you amateurs are doing, I can't resist citing this little gem, which seems taken directly from RealClimate: It must be almost unique in scientific history for a group of students admittedly without special competence in a given field thus to reject the all but unanimous verdict of those who do have such competence. This was from G. G. Simpson, talking about proponents of continental drift in 1943…. (quoted in Drifting Continents and Shifting Theories, by H.E. LeGrand, p. 102) 87. jae I got lost. Did you guys agree whether "error bars" should be put on the count data? 88. Posted Jan 9, 2007 at 1:11 PM | Permalink | Reply #87 No, didn't agree. But I'll try to find the book suggested in #11 and learn (found William Price, Nuclear Radiation Detection 1964, will that do?) 1) and 2) in #84 make no sense to me (*), but I'm here to learn. I agree with #85. (*) except 'difference between Poisson and Gaussian becomes small', but replace 10 to 20 with 1000 89. IL #87 No, I guess not. But there is no way I am wrong – or my name isn't Michael Mann 90. bender A count is not a sample; it is an observation. Observations are subject to measurement error, not sampling error. Sample means are calculated from sample observations (n > 1) and are subject to sampling error. We do this because we want to compare the known sample mean to the unknown population mean. In stochastic time-series the population being studied/sampled is special, in that it is virtual and it is infinite: it is an ensemble.
In stochastic time-series you are trying to draw inferences about a system's ensemble behavior, but you have to do that with a single (long!) realization, and you have to invoke the principle of ergodicity: the sample statistics converge to the ensemble statistics. If your series is short, or if the ensemble is changing behavior as you study it, then you will not get the convergence required to satisfy the ergodicity assumption. Then you are in trouble. So … why on earth would you apply sampling error to a set of observations when the thing that produces them is a highly stochastic process that only ever gives you one (possibly nonstationary) sample? I agree with me. 91. jae I remember just enough about statistics to be dangerous (maybe I could be a consultant for the Team?), but I think Bender is right. A hurricane count is simply an observation, not a collection of observations, like a sample. Thus, how can you justify applying a statistical parameter to it? 92. IL OK, it's a fair cop, my name is not Michael Mann, so given what Bender has just posted in #90, maybe this one is just going to run and run; I had better not try and move on. A count is not a sample; it is an observation. Observations are subject to measurement error, not sampling error. It's not a sampling error and it is not a measurement error!! It's part of the fundamental physics of the process. Observations produced by random processes with small probability are subject to considerable uncertainty! Yes, the observation that there were 15 Atlantic named storms last year (or whatever the number actually was) is an exact number, there were 15, no more, no less, if none were missed by all the satellites, ships and planes. But so what??! There is nothing magic about that number 15 even though it's an exact observation. The conditions in the ocean basin were not so constrained that it had to be 15 with a probability of 1! If conditions remained exactly the same it could easily have been 14, or 13 or 17 – and we can calculate the probability that that number of 15 has come up purely by chance, and we can also calculate the probability that any given number of named storms, ranging from 0 to as large as you like – and including 15 – could have occurred last year given that 15 were observed. That is what Paul plotted, and I think this is where the disconnect in our talking to each other is occurring. The fact that there were 15 does not mean that there was a probability of unity of the number 15 occurring. So we calculate the probability that 15 could have occurred even though 15 were observed!! I'm sure that to some that will still sound a bit gobbledegook, but think about throwing a die. I throw it and get a 2. It's an exact number and an exact observation but the chance of me getting that 2 is not unity, it's 1/6, so I can calculate the probability that that 2 came up by chance. (I know this is not a good analogy for the storms since with the die we have 6 numbers each with equal probability, but that is the principle). That treatment is fundamental to understanding the nature of the process and the inherent uncertainties when you have counts generated by such a fundamentally random process that gives you only a few observations per year. Ok, storm over and calming again – to try and answer jae's question.
I hope I don't put words in TAC's mouth or anyone else's for that matter, but I think we fairly well agree on fundamentals about uncertainties when we want to compare observations over time to test for trends etc; the difference (pace what I said to Bender above) seems mainly to be communication and how you present data. I think that we are agreed that moving 11 year averages with no consideration of these sorts of uncertainties is definitely not correct. 93. Posted Jan 9, 2007 at 3:12 PM | Permalink | Reply Margo pointed out some time back, there is a possibility that hurricane formation is not independent, that the more hurricanes there are in a year, the more predisposed the system is to form more through understandable physical mechanisms. Someone did say this, but I'm afraid it wasn't me! I thought it was an interesting idea. (Since I've used that word with irony here before, I think I should say I mean interesting in a good way.) I'm afraid I don't know if one hurricane forming affects the probability of another one forming later on. So … why on earth would you apply sampling error to a set of observations when the thing that produces them is a highly stochastic process that only ever gives you one (possibly nonstationary) sample? I don't think you would. But you might illustrate the estimated measurement uncertainties in some cases. So, if the "official" count recorded for a given year can hypothetically differ from the "real" number, then there might be cases where you want to show this. As it happens, when I see graphics, I'm content if they capture the major factors contributing to uncertainty. In the case of hurricane counts, if the annual numbers are presented unfiltered, I don't usually feel the need for anyone to add the "measurement uncertainty" to the hurricane count for each individual year. But if someone averages or smooths the count, then you bet I want to see an uncertainty interval. (Better yet, come up with "error" bars that account for both the statistical uncertainty in the mean and the measurement uncertainty. There are techniques for this.) Basically, you want "honest graphics" that convey a reasonably decent estimate of the uncertainty. 94. Posted Jan 9, 2007 at 3:23 PM | Permalink | Reply #90, you say (n > 1); why would (n ≥ 1) not work? Nevermind, I withdraw my agreements and disagreements, short time-out for me. 95. IL #93 Sorry Margo, my bad again. Sadly, senior moments have a strong correlation with time and have long since moved out of where I can describe them by Poisson statistics, but instead by a Gaussian with a high (and rising) mean. I've just searched and it was Sara Chan, comment 39 on the Judith Curry on Landsea 1993 thread – the thread that spawned all of this. Where is Judith Curry anyway? I would really like to know what she makes of these discussions. 96. Steve McIntyre #93. I don't see why hurricane formation would necessarily be independent. If you drive a motorboat through water, you get a train of vortices. I know the analogy isn't very close, but why would it be impossible for one vortex to prompt subsequent vortices? My guess as to a low 2006 season was based on this analogy. 97. bender Re #96 Spatial patterns of vortices lead to temporal patterns of anti-persistence and, therefore, statistical non-independence (at least at some space-time scales). Logical. 98. bender Re #94 If n = 1, what do you get for a standard deviation? 99.
EP Regarding the error bars (Fig 1): if a frequency is determined then the error must be a result of categorising the event as a hurricane or not. If it's based on wind speed then does that mean the error bar is the propagation of errors for the given hurricanes in question? How were the errors combined for data collected by (presumably) various measuring schemes over the decades? 100. TAC Paul, IL, bender, UC, jae and all: #87 asks: "Did you guys agree whether "error bars" should be put on the count data?" Well, I think the answer is we have some work to do. I was semi-serious when I suggested in #81 that: Perhaps we could come up with a whole new graphical method for plotting Poisson time series. Why? Well, I think IL is correct that (#92) "we fairly well agree on fundamentals about uncertainties" and that our differences relate primarily to "how you present data." However, I also agree entirely with bender that the error bars are wrong because, among other things, they violate a convention of time-series graphics (#2, #9, #14, #36) and are likely to be misinterpreted. However, apparently IL and Paul are used to viewing count data this way and, for them, the error bars do not present a problem. Nonetheless, I imagine they can accept the idea that some of us find the error bars confusing if not offensive. This leads me to think we need a new graphical method that we can all agree to, something unambiguous, compact, beautiful and expressive. We have plenty of creative talent right here at CA to do this ourselves, and I am not aware of any prohibition on contributing constructively to the science. We are not just auditors. That's my $0.02, anyway. 101. Posted Jan 9, 2007 at 7:42 PM | Permalink | Reply SteveM Re 96: I also don't see why a hurricane occurring right now might not affect the probability of a hurricane forming a short time later. I just don't happen to know. I could speculate but the physical arguments in my own speculation would sound like mumbo-jumbo — even to me. As long as you mentioned the Von Karman vortex street, voila: (The solid object is an island!) 102. Posted Jan 9, 2007 at 8:04 PM | Permalink | Reply Shoot! I hope this shows. 103. bender Re #100 This is child's play for a heavy like Wegman. That's why I don't bother. There are people who are already paid to solve these problems. Why are they not solving them? Why does it take volunteer efforts? 104. IL #103 I agree with bender on this point: everything we have been debating for hundreds of comments over several threads must be well known to professional statisticians, thrashed out in papers and books. I don't know that, but since Poisson statistics has been around for nearly 200 years, all of these things must have been well chewed over. #100 TAC – I can perfectly well accept that there are different ways of viewing the world; it sounds like Paul and I are coming from a physicist's viewpoint and need to understand the world from underlying physical principles. As long as we all accept the fundamental uncertainties given by the physics and calculate probabilities and confidence limits correctly when we calculate if there is a significant change with time etc, then, OK, I can live with people wanting to present the data in a different way. What you want would get thrown out of a physical science journal though, because it's 'unphysical'. #96 Steve, yes, hurricane formation may indeed not be truly independent; I and others mentioned this a little.
Unless the correlation becomes very high though so that hurricanes are more or less forming as soon as one is leaving a formation area in the ocean it will be extremely hard to tell. Behind all this debate are the small numbers of hurricanes per year and a small number of year’s worth of data that makes the uncertainties so large, its very difficult to study anything at all. Nobody has really responded to my hissy fit in #92, does this make sense to you? Or did you understand this all along? 105. Posted Jan 10, 2007 at 5:03 AM | Permalink | Reply Ok, too interesting, but other work to do, one more post;) #90 http://en.wikipedia.org/wiki/Ergodic_hypothesis says The ergodic hypothesis is often assumed in statistical analysis. The analyst would assume that the average of a process parameter over time and the average over the statistical ensemble are the same. Right or not, the analyst assumes that it is as good to observe a process for a long time as sampling many independent realisations of the same process. The assumption seems inevitable when only one stochastic process can be observed, such as variations of a price on the market. That the hypothesis is often erronous can be easily demonstrated [1]. I’m not very familiar with this, but I know that if we have a stationary process (strict sense), we can estimate the finite dimensional distributions from one (long) realization. So, in this context, I think it makes no difference if we speak about ergodicity or stationarity (if math gurus disagree, cases of singular distributions etc, pl. tell it now). Stationarity is easier concept for me (for some unknown reason). #90,91 why the sample size cannot be 1? With sample size of one you cannot estimate standard deviation of Gaussian distribution, but Poisson distribution is different case. Paul, IL: Price, Nuclear Radiation Detection has a chapter ‘Statistics of detection systems’. As an example, there is a data from 30 separate measurements, each taken for a 1-min interval, Geiger-Muller counter. (I can post the data later, if needed). Average is 28.2 counts. In the usual case the true mean is not known. Rather, a single determination of n counts is made. This value is reported as n +- sqrt(n). The meaning of this precision index is that there are only about 33 chances out of 100 that the true average number of counts for this time interval differs from n by more than sqrt(n). It is assumed that n_i=(approx) mean(n)=(approx) lambda=sigma^2. Using the example data it is found that 27 % of the n_i +- sqrt(n_i) limits do not contain the mean(n). But the story continues: If one is dealing with a series of counts, each of which is for the same time interval, mean(n) is the best value for the time interval employed. And TAC said in #77 Under the null hypothesis we have one population (of 63 iid Poisson variates). To estimate lambda, just add up all the events and divide by 63. I see no conflict here, with sample size of one you have to use n_i=(approx) mean(n)=(approx) lambda=sigma^2 but with larger sample size you average them all. And no conflict with my #80 either, we are not very interested in n_i per se, we want more general estimate (capability to predict, or to reconstruct the past, for example). To me, the Figure 1 looks like a result of model observation = the true lambda + error where the error term distribution is Poisson(lambda)[lambda+x], E(error) is zero and Var(Error) is lambda. Each year there is a new lambda, and past lambdas don’t help in estimating it. 
106. Willis Eschenbach UC and TAC, thanks as always for your thought-provoking posts. I got to thinking about your statement that: If one is dealing with a series of counts, each of which is for the same time interval, mean(n) is the best value for the time interval employed. and TAC’s statement that: Under the null hypothesis we have one population (of 63 iid Poisson variates). To estimate lambda, just add up all the events and divide by 63. It seemed to me that we could estimate the mean in a different way, which is that the mean is the value that minimizes the RMS error of the points (using the usual Poisson assumption that the variance in the dataset is equal to the mean). Using this logic, I took a look at the RMS error. Here is the result: The minimum RMS value is at 8.9. I interpret the difference between the arithmetic mean and the mean that minimizes the RMS error as further support for my conclusion that lambda is not fixed, but varies over time … and it may say that we can reject TAC’s null hypothesis. w. 107. David Smith A question: suppose that Atlantic storm count is affected by a random process (El Nino / La Nina) as well as a trend (SST). Would that be detectable by this analysis? (Pardon my likely poor posing of the question, but I hope the gist of it is apparent.) There’s evidence that year-to-year count is strongly affected by El Nino, which appears random. There’s also the thought that SST affects count, which is believed to be a strong effect by some (Webster etc.) while others (like me) think there’s probably a weak effect. 108. TAC #100, #103, #104: I admit developing a new graphic was fanciful. As bender and IL note, it is someone else’s job. #105 When bender (#90) talks about n .gt. 1, I think he’s using “n” to denote the number of Poisson observations (each observation would be an integer .ge. 0). When n=1, we have the non-time-series case that IL and Paul (I think) are used to working with. However, we have also used the letter “n” to denote the number of arrivals, and there is an interesting issue here, too. I’ll now use K, instead, to denote a Poisson rv, and k to denote an observed value of K. The most obvious problem with the \sqrt{K} formula occurs when k is zero (#77). In that case, applying the “\sqrt{K} reasoning” yields an arrival rate of zero with no uncertainty. Whatever our disagreements, I think we can agree that this is nonsense. Also, it should be troubling given that, for Poisson variates, K=0 is always a possibility. IL: I’m still having a hard time understanding what is meant in #79 by …physically this is exactly like radioactive decay or finding the pennies that I described or admissions to a hospital or any of those similar situations where counting statistics applies – the underlying physics is the same… For me, I cannot even see how the physics of a die and the physics of a random number generator are the same. However, if you want to argue that a rolling cube and a CCD detector in a dark room have the same physics (and then there’s the hospital), I’m all ears. What is really going on here? I think IL may have defined statistics as a subset of physics — I guess that’s his prerogative — in which case the result is trivial. However, statisticians might not see it that way; they tend to draw the lines somewhat differently.
They talk about events (for the die, the event space looks something like {.,:,.:,::,:.:,:::}) which are governed by physics, and the corresponding random variables (which take on values like {0,1,2,3,4,5,6}) which have statistical properties that can be considered without reference to physics. As for the rest, I think most of the arguments have been made. I still agree with bender; I don’t like the error bars. I see small problems with the error bars (e.g. as defined, when k=0 they don’t work); I see medium-sized problems with the error bars (where the estimated error and the estimated statistic are correlated, unsophisticated (e.g. eyeball) statistical tests and confidence intervals will tend to be biased toward rejecting on the left (btw: this bias is connected with Willis’s RMSE estimator in an interesting way)); and then there’s the BIG problem: Potential misinterpretation. They also add clutter to the graphic and require explanation. Overall, not a good thing. However, I don’t hold out much hope that repeating these arguments, or bender’s arguments (which I also happen to agree with), will change any minds. 109. Ken Fritsch Re: #107 A question: suppose that Atlantic storm count is affected by a random process (El Nino / La Nina) as well as a trend (SST). Would that be detectable by this analysis? David S, I can give you my layman’s view (and repeat myself as I am wont to do) and that is that TAC’s comment in #56 and quoted below would indicate that a statistically significant trend is not found. I also believe that the point has been made that the use of lower frequency filtering needs to be justified before applying and that the filtering application, if justified, must make the necessary statistical adjustments (to neff). Note that in this case all three tests, as well as an eyeball assessment, find no evidence of trend (the p-values for the 3 tests are .35, .71, and .61, respectively (#38)). I think we can safely conclude that whatever trend there is in the underlying process is small compared to the natural variability. The remainder of the discussion (which comprises most of it) comes by way of a disagreement on the display of error bars and the thinking behind it. From a layperson’s view, I agree with TAC and Bender on the matter of the thinking behind the error bars and have appreciated their attempts to explain the interplay of stochastic and deterministic processes and the appropriate application of statistics. My agreement may be because this is the approach to which I am familiar. Perhaps it is my layperson’s view, but I am having trouble understanding the other approaches presented here and their underlying explanations. I am not even sure how much of the differing views here result from looking at deterministic and stochastic processes differently. This discussion has been very friendly compared to some I have experienced on this subject. I do think that there is a correct comprehensive view of how statistics are applied to these processes and not separate deterministic and stochastic ones. 110. bender To sew these threads up we should get back to the project we were working on prior to the publication of Mann & Emanuel (2006), which would require translating John Creighton’s MATLAB code (for orthogonal filtering) into R and applying it to these data. 
Because when you account for the low-frequency AMO (similar to what Willis has basically done), and Neff, and the Poisson distribution of counts (as Paul Linsay has done), I am sure that what you will find is no trend whatsoever. The graphical display would be cleaner and more correct than Linsay’s here, but would still prove his basic point: this is a highly stochastic process which is statistically unrelated to the CO2 trend (but might be related to a decadal oscillatory mode that is a primary pathway for A/GW heat exchange). 111. Willis Eschenbach Well … nothing is as simple as it seems. I had figured that if the standard deviation as used by Paul in Figure 1 was an estimator of the underlying lambda, I could use that to figure out where lambda was, as I did in post #106 above. This showed that lambda estimated by that method was smaller than the arithmetic mean. However, the world is rarely that simple. Having done that, I decided to do the same using R with random Poisson data, and I got the same result: the lambda calculated by the same method is smaller than the actual lambda … so I was wrong, wrong, wrong in my conclusions in #106. However, this also means that the use of sqrt(observations) as error bars on the observations leads to incorrect answers … go figure. w. 112. Paul Linsay #110, bender. For fun I showed Figure 1 to some of my former physics colleagues. Nobody even blinked. The only point anyone made was that the error bars for very small n should be asymmetric because of the asymmetric confidence intervals for the Poisson distribution. Which I knew, but didn’t want to bother with for an exercise as simple as this. In any case, the error bars as plotted (with the asymmetrical correction at small n if you want to be fussy) are the values needed to fit the data to any kind of function. They have to be carried through into any smoothing function like the running average used by Curry or Holland and Webster. For fun I’ve also looked at the data back to 1851 without bothering about possible undercounting. The mean drops to 5.25 hurricanes/year from 6.1 but the data still look trendless. The distribution and overlaid Poisson curve match as well as in Figure 2. With 156 years of data it provides an interesting test of the Poisson hypothesis. The probability of seeing a year with no hurricanes is exp(-5.25) = 0.0052, quite small. But in 156 years I’d expect 156*exp(-5.25) = 0.82 years with no hurricanes. In fact, there are two years, 1907 and 1914, that have no hurricanes. 113. EP Surely a quick and easy way of showing a trend would be to plot the Poisson distribution for several time periods, say every 40 years? Then you could see if the mean shifts. 114. bender Re #112 Send your physics colleagues here and maybe they’ll learn something about robust statistical inference if they read my posts. Those error bars are meaningless in the context of the only problem that matters: hurricane forecasting. People who don’t blink scare me. 115. Paul Linsay be afraid, verrrry afraid. 116. David Smith Re #109 Ken, thanks. That’s about what I gathered. One day I’d like to learn about the statistical characteristics of processes which are driven by both random and trended factors. 117. bender 115 Yes, well, I suppose you have nothing to lose being wrong, so why should you be afraid? Go ahead and mock me. Just be sure to send your physics friends here.
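Re #106/#111: the check Willis describes is a few lines of R. This sketch assumes one reading of his method, namely that the “RMS error” was computed with the residuals scaled by the sqrt(count) error bars, so the numbers are illustrative rather than a reproduction of his run:

# Simulation in the spirit of #111. Assumes the fit minimizes sum((x - l)^2 / x),
# i.e. residuals weighted by the per-point error bars sqrt(x) from Figure 1.
set.seed(1)
lambda.true <- 6.1
nyears      <- 63
est <- replicate(2000, {
  x <- rpois(nyears, lambda.true)
  x <- x[x > 0]                  # sqrt(x) bars are undefined at x = 0 (TAC's point in #108)
  c(mean = mean(x),
    wrms = optimize(function(l) sum((x - l)^2 / x), c(0.1, 20))$minimum)
})
rowMeans(est)  # the plain mean sits near 6.1; the weighted-RMS estimate comes in near 5

Under this reading, Willis’s conclusion in #111 follows directly: fitting with sqrt(count) error bars pulls the estimate below the true rate, because the smallest counts get the smallest bars and therefore the heaviest weight.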
118. IL TAC #108 IL: I’m still having a hard time understanding what is meant in #79 by For me, I cannot even see how the physics of a die and the physics of a random number generator are the same. However, if you want to argue that a rolling cube and a CCD detector in a dark room have the same physics (and then there’s the hospital), I’m all ears. What is really going on here? I think IL may have defined statistics as a subset of physics — I guess that’s his prerogative — in which case the result is trivial. I guess I’m not quite in the category of Lord Rutherford, who said ‘All science is physics or stamp collecting’, but maybe my view of physics is more catholic than most (here anyway). What I meant was that although radioactive decay, the hurricanes and the hospital admissions have different physical processes (quantum fluctuations / heat engine of the ocean / infection by pathogens), nevertheless, when you strip each to its bare essentials, they are working in the same way. The probability of any given radioactive atom decaying in a certain time is completely minute, but there are a vast number of atoms in the lump of radioactive material, so that the tiny probability times the number of atoms gives a few decays per second (say). The probability of a hurricane arising in a particular area of ocean at a particular time is tiny, but when you add up all those potential hurricane-forming areas over the whole of an ocean basin and over a long enough time you get a few hurricanes per year. The probability of any one of us as an individual getting a pathological disease is really tiny, but there are a lot of people, so a hospital sees a small but steady stream of people each day. (I say steady; what I mean is a few each day, but the number who are admitted each day fluctuates according to Poisson statistics). (Of course, if the events are no longer random with small probability, such as if an infectious disease starts going through a neighbourhood, then the Poisson distribution breaks down. The same would happen if hurricanes formed at a higher rate so that the formation of one affected the probability of another forming). Whenever you have a small probability of something happening to an individual (person/area/thing etc.) but an awful lot of persons/areas/things, then you get Poisson statistics. I call that stripping a problem to the essentials ‘physics’ (maybe I do follow Rutherford after all), but that’s not important; what is important is understanding the inherent and large uncertainties in the system. Re #112 Send your physics colleagues here and maybe they’ll learn something about robust statistical inference if they read my posts. Those error bars are meaningless in the context of the only problem that matters: hurricane forecasting. People who don’t blink scare me. Strictly speaking, of course, you are right that the error bars on the graph are not necessary for ‘robust hurricane forecasting’; you can correctly study the statistics of the sequence of numbers without plotting the error bars on figure one, BUT THEY ARE THERE! And if you want to understand the underlying physics (there I go again) of a problem, i.e. its most fundamental essentials, then for me, for Paul, and clearly for Paul’s colleagues of a ‘physics’ persuasion, these are the sorts of things you need to think about. From many of your posts and from others’ posts here, I still don’t think many people here fully understand the underlying principles and fundamental large uncertainties when you have a random process at work.
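IL’s “tiny probability times a vast number of opportunities” argument in #118 is the classical binomial-to-Poisson limit, which is easy to see numerically. A minimal sketch with made-up numbers, purely for illustration:

# The law of rare events behind #118: many independent trials, each with a tiny
# probability, give counts that are essentially Poisson. Illustrative numbers only.
N <- 1e6      # e.g. potential formation areas, atoms, or people
p <- 6e-6     # tiny per-trial probability; expected count N * p = 6
k <- 0:15
round(cbind(binomial = dbinom(k, N, p), poisson = dpois(k, N * p)), 5)
# the two columns are nearly identical (differences on the order of 1e-5)

The approximation fails exactly where IL says it does: once one event changes the probability of the next, independence goes, and the Poisson limit goes with it.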
119. IL #112 In any case, the error bars as plotted (with the asymmetrical correction at small n if you want to be fussy) are the values needed to fit the data to any kind of function. They have to be carried through into any smoothing function like the running average used by Curry or Holland and Webster. Yes, yes. Absolutely. Why can’t this be seen? If bender and others are calculating ‘robust time series’ correctly (I am not doubting that bender does; I don’t think Judith Curry does), then this is all implicit in their work even if they don’t realise it – Paul and I are just making it a bit more explicit. 120. TAC #119 IL: I think I appreciate what you mean by I’m not quite in the category of Lord Rutherford who said ‘All science is physics or stamp collecting’ but maybe my view of physics is more catholic than most. In its dedication to a search for grand theories, for unifying explanations of seemingly unrelated phenomena, physics is magnificent. When I wrote (#108) “I think IL may have defined statistics as a subset of physics,” that is what I had in mind. It is a great and noble thing. However, I hope you appreciate that statisticians sometimes see things differently. For example, statisticians will bristle when they read what you endorsed (#119): “fit the data to any kind of function.” You see, statisticians, in their parochial ways, believe that one fits functions to data — never the other way around. However, based on a sample of N=2, can I conclude that physicists do not subscribe to this principle? [OK: That last part was undeniably snarky]. Anyway, I think we are mostly in agreement. When are we having that beer? 121. Posted Jan 11, 2007 at 10:11 AM | Permalink | Reply #107 If we deal with a random variable X whose distribution depends on a parameter which is itself a random variable with a specified distribution, then the random variable X is said to have a compound distribution. One such example is the negative binomial distribution (the lambda of the Poisson distribution is a Gamma-distributed rv): http://en.wikipedia.org/wiki/Negative_binomial_distribution Hey, they mention ‘tornado outbreaks’... An AGW-oriented model would of course be lambda = f(anthropogenic CO2) (which is a possible alternative hypothesis to our H0: lambda = constant, which we have been testing here for many days). And by Figure 1 Paul says that a lot of very different lambda-curves could be well fit to this data. Paul, sqrt(0) = 0, no error bars, just a data point. Let’s put this value into Price’s text: This value is reported as 0 +- 0. There are only about 33 chances out of 100 that the true average number of counts for this time interval differs from 0 by more than 0. 1) Not very true; it underestimates the percentage. 2) No error bars: people get the idea that this is something exact, something completely different from the other cases (1, 2, 3, ...). +-sqrt(n) is a confusing rule for me, that’s all. But I think I understood your message now. 122. Ken Fritsch Re: UC request in comment #3 added error bars equal to +- sqrt(count) as is appropriate for counting statistics. Not very familiar with this, any reference for layman? In keeping with my obsession to retain the major points of threads such as this one in a reasonable summary, I would like to see any reference presented to answer UC’s question from very early in the thread — as my perusal failed to come up with one. I would like to add a request to see a reference that handles the 0 count with this approach. The reference should be for an application other than radioactive decay. Also, I assume we are talking here not about counting error as in radioactive decay for many independent measurements, where if you have only one measurement the counting error is N^(1/2), but about many measurements from the same system, where the mean becomes N bar and the standard deviation becomes (N bar)^(1/2), which are the mean and standard deviation of a Poisson distribution as derived from all of the data points.
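On the k = 0 case raised in #108, #121 and #122: the usual fix is an exact Poisson confidence interval for the rate rather than k +- sqrt(k). A minimal R sketch; the same exact interval is available from poisson.test in base R:

# Exact confidence interval for a Poisson rate from a single observed count k.
# Unlike k +- sqrt(k), it gives a sensible answer when k = 0.
exact.ci <- function(k, conf = 0.95) {
  a  <- 1 - conf
  lo <- if (k == 0) 0 else qgamma(a / 2, k)   # lower limit
  hi <- qgamma(1 - a / 2, k + 1)              # upper limit; nonzero even for k = 0
  c(lower = lo, upper = hi)
}
exact.ci(0)               # roughly (0, 3.7): zero observed events is not zero uncertainty
exact.ci(6)               # roughly (2.2, 13.1): asymmetric, unlike 6 +- 2.4
poisson.test(0)$conf.int  # the same exact interval from base R

This is also Paul’s “asymmetric correction at small n” from #112, made explicit.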
123. Tim Ball Once a contained weather system begins moving over the surface of the earth it is subjected to the factors created by movement over a rotating surface and also the movement of an object through a uniform medium. The speed within the hurricane has been discussed, but we also need to consider the speed with which the system moves over the surface. The deflection of the trajectory of a system as it moves away from the equator is affected by the increasing Coriolis effect, and an important part of this is changing angular momentum (am). The latter influence (am) varies with the speed of the system. The photograph Margo provides (#102) appears to indicate the second factor, and that is sinuosity. There is clear sinuosity in the circumpolar vortex and in the flow of the Gulf Stream and North Atlantic drift. It is logical to assume that a weather system moving through the uniform medium of the atmosphere will be subjected to sinuosity. As I understand it, nobody has effectively explained the development of sinuosity. The best explanation I have heard is that it is the most efficient way of moving from A to B with the least amount of energy used – a natural conservation-of-energy process. 124. bender I see the problem now. Two issues have been conflated here. I have been arguing about what kind of graphical representation and error structure is required to make robust inferences about changes in the ensemble mean number of hurricanes expected in a year. The physics people are concerned about propagation of error, arguing that if you are going to use some observation in a calculation you need to know the error associated with the observation and carry that through the calculation. I won’t disagree at all with the latter, but I would add that you had better understand the former if you want to understand what it is the hurricane climatologists are asking. My point is that it doesn’t make sense to treat an observation as though it were a representative sample of the ensemble. You physics people need to think about what it means to infer a trend based on a sample realization drawn from a stochastic ensemble. Until you understand that you will continue to bristle at my comments. 125. bender Sampling error and measurement error are not the same thing. 126. Willis Eschenbach Tim B., thanks for the post. You say: As I understand it, nobody has effectively explained the development of sinuosity. The best explanation I have heard is that it is the most efficient way of moving from A to B with the least amount of energy used – a natural conservation-of-energy process. Sinuosity is quite well explained by the Constructal Law. This Law actually explains a whole host of phenomena, from the ratio of weight to metabolic rate in mammals to the general layout of the global climate system. There is a good overview at the usual fount of misinformation; I think William Connelly hasn’t realized that Bejan’s work covers climate. The Constructal Theory was developed primarily by Adrian Bejan. His description of the theory is here.
A two page Word document, The design of every thing that flows and moves, is a good introduction to the theory. He is one of the 100 most highly cited engineers in the world. His paper: Thermodynamic optimization of global circulation and climate Adrian Bejan and A. Heitor Reis INTERNATIONAL JOURNAL OF ENERGY RESEARCH Int. J. Energy Res. 2005; 29:303–316 Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/er.1058 is a clear exposition of the major features of the global climate from first principles. Sorry to harp on this, but Bejan’s work has been wildly under-appreciated in the climate science community. w. 127. IL #120 TAC Steve has my email. Probably about time to go back to lurk mode. 128. TAC IL, Paul, bender, et al. I enjoy debating with smart people, and this time has been a particular pleasure. I think the discussion helped clarify the issues, and made me aware of things I had not thought about in a long time, if ever (bender summarized it elegantly with “Sampling error and measurement error are not the same thing” — but they are both real and they both matter). I’m disappointed that no one jumped on the opportunity to point out, gratuitously, that “fitting data to functions (models)” seems to be SOP among some climate scientists. I thought that qualified as “low-hanging fruit”; you guys are so polite! Thanks! TAC 129. Posted Jan 13, 2007 at 2:58 AM | Permalink | Reply #47 Willis, Not sure about sinusoidal distribution, but how about Poisson distribution where lambda is a function of some other random process (see 107,121, and I think the paper Jean S linked in #19 is relevant as well). I noted that there are extremes that don’t fit well to your histograms (3rd Figure) – and these extremes are not necessarily recent ones (1887, 1933). One way to model overdispersed count data (variance greater than the mean) is using the negative binomial distribution (lambdas follow Gamma distribution), but in this case some stochastic process as a ‘lambda-driver’ would probably do. And now we are so close to Bayesian data analysis, so I have to ask: Can anyone give a predictive distribution for future hurricanes given the SST? i.e. yearly p(n|SST). How different it would be from John A’s Poisson(6) ? We don’t know the future SST, but we can plug those values in later. And the same for global temperature, give me p(T|CO2,Volcanic,Solar). I will check the accuracy of our knowledge ten years later with realized CO2, Volcanic and Solar values. TAC, I wrote an example on fitting data to functions #63 in http://www.climateaudit.org/?p=1013 My model: “Named Storms’ is i.i.d Gaussian process. 2005 is over 4 sample stds, astronomically improbable. My models are never wrong, so 2005 is faulty observation. Outlier. Removed. But let’s not blame climate scientist for everything, this kind of fitting was invented earlier that climate science, I think:) 130. Tim Ball #126 Thanks Willis: It appears this is the information I had heard about, namely that sinuosity is an atttempt to maximum energy efficiencies by overcoming restrictions such as friction throughout an entire sysem. I would still like some response to my other points about angular momentum and sinuosity as applied to the movement of hurricanes. The deflection of all the tracks to the right as they move away from the equator in the Northern Hemisphere is mostly a function of adjustment to changing rotational forces. The degree of adjustment is a function of the speed of the entire weather system. 
Depending on which way this macro guidance sends the system then determines the geophysical and other factors that will come into play. I realize this is not statistics, but the number of occurrences, such as US landfall of hurricanes, intensities achieved, and many of the factors being dicussed here, are directly determined by them. 131. Ken Fritsch Re: #129 Interesting point. Bill Gray uses past TS data to construct a predictive model for forecasting TSs weeks ahead of the season and then massages the data again to “adjust” his predictions. The way I look at what he has accomplished is that the predictive power of the advanced model is not statistically significant but that closer to the event prediction is. I would think that someone must have published models for hurricane events using past data without the attempt to be predictive, i.e. after the fact. Gray uses numerous variables in his predictive models and SST, as I remember, plays a part, albeit a small one. As I recall Gray rationalizes his use variables by attempting to explain the physics involved. Modern computer models are used to predict TSs but have very little out-of-sample results to judge them by. What we really need, at least as a starting point ,is a computer model that uses the actual conditions at the time of the TS event as predictor of TSs. But are not we getting a bit ahead of ourselves when data seem to indicate a trendless line of TSs versus time. Re: #130 I am sure you know better than I, but is it not a fact that one thing computer models have been able to accomplish with some success is to predict the tracks of TSs. What inputs do they use? 132. Tim Ball #131 Ken, thanks for the response. The ability to predict track and speed has not been very successful, especially when you consider the limited range of directions. That is, they all move in a general pattern in one relatively small quadrant of the compass. In addition, the predictions of different computers of a single hurricane vary considerably. 133. jae Oh, good grief. Hurricanes are one of the ways that Earth dissipates heat. When the SST is hotter and atmospheric conditions (wind speeds, shear, etc.) are at the right level, the hurricanes increase in number and strength (as well as “simple” thunderstorms). It has to be related, in the long run, to SST. I’ll bet there were a lot more severe hurricanes during the MWP. It’s too bad we don’t have reliable proxies for past hurricanes. 134. David Smith RE #132 The computer models use standard meteorological inputs and generate a path and intensity prediction. One thing that’s been found is that an “ensemble” (average prediction, or range of predictions, from many models) is often better than the prediction from just one model. It’s also been found that flying jets into the surrounding atmosphere to gather data results in much-improved forecasts. It seems that the computers suffer from GIGO, which is not a surprise. I am very interested in seeing the European (ECMWF) computer sstorm season predictions this year. As I understand it, they let the computer run months of weather map predictions and then count the storms the computer generates. Good luck with that. 135. Ken Fritsch What I have been attempting to point out about models used to predict a TS event or its path is that the predictive capabilities appear to improve as data from actual current or near current time conditions are used to continually readjust the predictions. 
Longer term predictions have to first make educated guesses as to what these conditions will at a future time and then use those “guesses” of conditions to determine the probability of a TS event or probability of the direction of its path. What I would like to see is how well do these models perform if they have all the data of the existing conditions for an incremental step and then as conditions change how well they perform for the next step and so on. In other words how well can they simply process current data by excluding the prediction of conditions? 136. David Smith A plot of NHC storm forecast errors is shown here . The forecasts are one-day to five-day. For example, the chart shows that the typical error for the 2-day (48 hour) forecast is about 100 nautical miles. These can be thought of as computer + human forecasts. As shown, the farther into the future the forecast goes, the greater the cumulative error. I will say from watching many storms that, beyond five days, the forecasts are almost useless. This is why I’m fascinated to see what the Europeans will forecast from their computer-generated storm seasons. The computer-only performance is shown here , for the 2-day forecast. The computers do a little worse than computer + human, but they are improving. Interestingly, the Florida State Super Ensemble (FSSE) does as well as, or slightly better than, the human + computer performance. My understanding is that the FSSE looks at all computer models and considers their historical error tendencies in making its forecast. 137. Tim Ball #134 By implication then the problem is not enough models. More models and therefore better approximations. I also note the comments about better accuracy as the actual event approaches. This is the practice I see in Canadian forecasts. I call it progressive approximation. With regular weather forecasts I understand that if you say tomorrow’s weather will be the same as today you have a 63% chance of being correct. This is based on the rate of movement of weather systems which generally take 36 hours to move through. Hence the probability of the weather being the same in 12 hours is 63%. Surely lead time is essential in forecasting for extreme events to provide time for evacuation or other reactions. How many times will people pack up and leave when there was no need? 138. Willis Eschenbach Tim Ball, you say in #130: #126 Thanks Willis: It appears this is the information I had heard about, namely that sinuosity is an atttempt to maximum energy efficiencies by overcoming restrictions such as friction throughout an entire sysem. Actually, it sound like you are talking about something different, the minimization of entropy. The Constructal Law is something different and much more encompassing. It was stated by Bejan in 1996 as follows: For a finite-size system to persist in time (to live), it must evolve in such a way that it provides easier access to the imposed currents that flow through it. The basis of the theory is that every flow system is destined to remain imperfect, and that flow systems evolve to distribute the imperfections equally. One of the effects predicted by the Constructal Law is the one that you have alluded to above, the maximization of energy efficiencies. The Constructal Law predicts not only the maximization, but the nature and shape of the resulting flow patterns. Because of this power, it has found use in an incredibly wide variety of disciplines. 
See here for a range papers utilizing construcal theory from a variety of fields, including climate science All the best, w. 139. David Smith RE #137 I think the key to using multiple models in an ensemble is to know their weaknesses and then make an adjustment for those biases. The GFS model, for instance, may be slow at moving shallow Arctic air masses, so ignore it on those and look at the other models. The NAM model continuously generates a tropical storm near Panama during the hurricane season, so ignore it on that regard. And so forth. I think that’s what the ensemble method does. Seems, though, that the better approach is to fix the models. I have a question which you or someone else might be able to help me answer. The question is, why doesn’t the temperature in the upper Yukon (or other snow-covered polar land) on a calm night in the dead of winter fall to some absurdly low temperature, like -100C? It seems to me that there is little heat arriving from the earth, due to snow cover, and little or no sunlight, and (often) clear skies allowing strong radiational cooling What brakes the cooling? Thanks. 140. Demesure I have a question which you or someone else might be able to help me answer. The question is, why doesn’t the temperature in the upper Yukon (or other snow-covered polar land) on a calm night in the dead of winter fall to some absurdly low temperature, like -100C? It seems to me that there is little heat arriving from the earth, due to snow cover, and little or no sunlight, and (often) clear skies allowing strong radiational cooling What brakes the cooling? Thanks. David, where is the trick? A night in the dead of winter over there is the same as a day: without sun. 141. John Reid I would like to resurrect this thread if that is possible because I believe that everyone has missed the point. We were discussing statistical inference. Statistical inference involves hypothesis testing. Hypothesis testing involves setting up a Null Hypothesis. Discussions such as the one about whether we should or should not show confidence limits on graphs can often be resolved by asking the question “What is the underlying null hypothesis?”. Indeed is there a null hypothesis underlying Paul Linsay’s claim that the sample data are a “good” fit to a Poisson distribution? I will now set up a null hypothesis for dealing with Paul’s proposition about the hurricane data. My null hypothesis is the following statement: “The annual hurricane counts from 1945 to 2004 are sampled from a population with a Poisson distribution and the hurricane count of 15 for the year 2005 is a sample from that same population.” The mean count for the 60 years 1945 to 2004 inclusive is 5.97. We will use this as an estimate of the parameter of the distribution. I have calculated that the probability of obtaining a count of 15 or greater from a Poisson distributed population with a parameter of 5.97 is .0005, ie 1 in 2000. We can therefore reject the null hypothesis at the 0.1 percent level. It follows that either 2005 is an exceptional year which is significantly different from the 60 preceding years or that the process which generates annual hurricane counts is not a Poisson distribution. Personally I prefer that latter interpretation. Hurricane generation is likely to depend on large scale ocean parameters such as mixed layer depth and temperature which persist over time. Because of this it is unlikely that successive hurricanes are independent events. 
If they are not independent then they are not the outcome of a Poisson process. Poisson works only if there is no clustering of events. A back-of-the-envelope calculation indicates that 15 is rather a large sample value. With a mean of about 6 and a standard deviation of about 2.5, 15 is more than 3 standard deviations away from the mean. It will certainly have a low probability. Ironically, Paul Linsay’s data examined in this way leads to a conclusion which is diametrically opposed to his original intention in presenting the data. All the same it was a great idea and one certainly worth discussing on Climate Audit. Thanks Paul. JR 142. Pat Frank It seems to me, accepting your figures, that your 1 chance in 2000 is the statistic that gives the chance of having 15 hurricanes in any one year. However, there are 60 years in 1945-2004, and so your 1 in 2000 becomes 60 in 2000 for any set of sixty years. From your calculation, there’s a 3 percent chance that in any 60 year period, one year will have 15 hurricanes. From Paul’s figure, we see one 15-hurricane year. So, your null hypothesis is rejectable at the 3 percent level. Not very significant. 143. bruce Re #141, 142: And for us lay folk, the conclusion is?? 144. Paul Linsay #141, 142: Pat’s analysis is the correct one. In my original calculation I got a mean of 6.1 hurricanes per year. This gives a probability of 1.0e-3 of observing 15 hurricanes in any one year. In 63 years the probability is 6.5e-2 of observing at least one 15-hurricane year. Next point: When should a rare event generated by a stochastic process occur? Only when you’re not looking? Only if you’ve taken a very long time series? The correct answer: They happen at random. It’s hard to build up an intuition for these kinds of probabilities. In my youth, I spent many sleepy nights on midnight shift at Fermilab watching counter distributions build up. They always looked strange when there were only a few tens to hundreds of events. It takes many thousands of events to make the distributions look like the classic Poisson distribution. We just don’t have that kind of data for hurricane counts. There is another very strong piece of evidence for the Poisson nature of hurricane counts that is not shown on this thread but is shown in the continuation thread. If you scroll down to Figure 5 you will see a plot of the distribution of times between hurricanes. Hurricanes occur at random, but the time between them follows an exponential distribution, which is the classic signature of a Poisson process. The same distribution occurs if the data is restricted to 1944-2006, and within errors, with the same time constant. 145. John Reid Pat Frank, you say: However, there are 60 years in 1945-2004, and so your 1 in 2000 becomes 60 in 2000 for any set of sixty years. but my null hypothesis was not about ANY year of the 60 years; it was specifically about THE year 2005. The hypothesis that the high count in 2005 arose purely by chance can be rejected at the 0.1 percent level, as I stated. It might be more appropriate to criticize my choice of a specific year in my null hypothesis. I did so because it is a recent year. We are looking for a change in the pattern. The subtext of all this is that the Warmers are saying that global warming is causing more cyclones and Paul is saying “No, it’s just chance”. When, after 60 years of about 6 cyclones a year, we suddenly get a year with 15 cyclones, is that due to chance? I have shown that it is not. It is too improbable. It is highly significant and we need to look for another explanation.
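For readers wanting to check the figures traded in #141-#144, the tail probabilities are one-liners in R (lambda as reported in #144; note that “exactly 15” and “15 or more” differ at the margin):

# Tail probabilities for the 2005 count, with lambda as quoted in #144.
lam <- 6.1
dpois(15, lam)                                   # P(exactly 15 in one year), about 1.0e-3
ppois(14, lam, lower.tail = FALSE)               # P(15 or more in one year), about 1.6e-3
1 - (1 - dpois(15, lam))^63                      # at least one year of exactly 15 in 63 years, about 0.06
1 - (1 - ppois(14, lam, lower.tail = FALSE))^63  # at least one year of 15 or more in 63 years, about 0.10

Either way, the multiplicity point made by Pat Frank and Paul stands: a roughly one-in-a-thousand-per-year value is much less startling once it is given sixty-odd years in which to appear.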
With regard to Figure 5 on the other thread – the real issue is whether the displayed sample is significantly different from the exponential law expected for a Poisson distribution, not whether the graph looks good. To test this you would need to use chi-square or Kolmogorov-Smirnov, as suggested by RichardT on the other thread. Quantitative statistical methods provide the best way of extracting the maximum amount of information from a limited amount of data. Qualitative methods like eyeballing a graph really don’t tell us very much; it is too easy to fool yourself. Even though 2005 had a significantly greater number of cyclones, I do not believe that this supports AGW. All it does is imply that a mechanism exists for generating an abnormal number of cyclones in some years. The modest count of 5 in 2006 suggests that 2005 was a one-off rather than a trend. A one in two thousand year probability and you don’t think that is significant? JR 146. Willis Eschenbach John Reid, I don’t understand the logic. If you picked a year at random, it might be true, but if you pick just the highest year out of the bunch, you have to look at the odds of that turning up in a sample of N=60, not N=1. w. 147. Posted Feb 8, 2007 at 4:38 AM | Permalink | Reply Right, if the chance of a given year having x hurricanes is 1 in 2000, then the chance of at least one year out of 60 having x hurricanes is 60 in 2000. So, while it’s highly unlikely that 2005 should be that year, it’s not quite as unlikely that that year should fall between 1941 and 2000 (for example). 148. Ken Fritsch I essentially agree with Paul Linsay’s comments, i.e. that the counts are best fit with Poisson distributions and that the year 2005 was a very unusual year. I also think that, since a better Poisson fit is obtained by dividing the time period and we have some a priori doubts about the early counts, the results from splitting the data would agree with an undercount (a random one, that is). The error bars have nothing whatsoever to do with these arguments, but error bars representing the mean (square root of it) for the entire period would be more appropriate. Thought I would sneak that in. My background on using the square root of the mean for an individual count to indicate statistical error is appropriate if I were counting a radioactive decay that I knew would yield a Poisson distribution and I made only 1 count. If I made multiple counts I would average them and use the resulting mean to calculate the statistical error. Paul, I reside close to where you spent your midnight shifts. 149. Pat Frank #145 — John, thinking statistically is an exercise in counter-intuitive realizations. If you choose any event and ask after the probability that it would happen just when it did, that number will be extremely small. Does that mean no events at all will ever happen? With regard to your admirable calculation, you can choose 2005 to be your year of study, but the statistics of your system include 60 years and not just one year, because you applied it to Paul Linsay’s entire data set. That means a 15-hurricane year has a 3 percent chance of appearing somewhere in that 60 years. The fact that it appeared in 2005 is an unpredictable event and has the tiny chance you calculated. But that same tiny chance would apply to every single year in the entire 2000 years of record.
And in that 2000 year span, we know from your calculation that the probability of one 15-hurricane year is 1 (100%). Even though the chance of it appearing in any given year is 0.0005. So, how does one reconcile the tiny 0.0005 chance in each and every year with a probability of 1 that the event will occur within the same time-frame? One applies statistics, as Paul did, and one shows that the overall behavior of the system is consistent with a random process. And so the appearance of unlikely events can be anticipated, even though they cannot be predicted. You wrote: “When, after 60 years of about 6 cyclones a year, we suddenly get a year with 15 cyclones is that due to chance?” Look at Paul’s original figure at the top of the thread: there was also a 12-hurricane year and two elevens. Earth wasn’t puttering along at a steady 6, and then suddenly jumped to 15, as you seem to imply. It was jumping all over the place. There was also a 2-hurricane year (1981). Isn’t that just as unusual? Does it impress you just as much as the 15-year? 150. richardT John, What you are doing is a post hoc test – finding an interesting event and showing that it unexpected under the null hypotheses. This type of test is very problematic, and typically has a huge Type-1 error. But you are correct that Paul Linsay’s analysis is incomplete. A goodness-of-fit test is required, AND a power test is required to check the Type-2 error rate. 151. John Reid Willis says: If you picked a year at random, it might be true, but if you pick just the highest year out of the bunch, you have to look at the odds of that turning up in a sample of N=60, not N=1. I didn’t pick it because it was the maximum, I picked it because it was recent. I am testing for change. It is a pity that this conversation didn’t happen a year ago when it would have been THE most recent year. I agree that the water is muddied slightly by it not being the most recent year. Ken Fritsch says: the year 2005 was a very unusual year Thank you Ken. So unusual in fact that a year like this should only occur once in 2000 years if the Poisson assumption is correct. I agree with what you say about error bars. Pat Frank says: And in that 2000 year span, we know from your calculation that the probability of one 15-hurricane year is 1 (100%). Where did you learn statistics? By your argument the probability of a 15-hurricane year in 4000 years would be 2. In fact the probability of a 15-hurricane year in 2000 years is 1 – (1-.0005)^2000 = 0.632 not 1. and: there was also a 12-hurricane year and two elevens. Earth wasn’t puttering along at a steady 6, and then suddenly jumped to 15, as you seem to imply. It was jumping all over the place….. Isn’t that just as unusual? Does it impress you just as much as the 15-year? The mean of the whole 62 year period is 6.1. I have calculated the Poisson probability of 11 or more hurricanes in a single year to be .0224. The probability of of 11-or-more-hurricane years in 62 years is therefore 0.75 The probability of 4 such events is .19. No it doesn’t impress me. The method I used whereby I partitioned the data into two samples, used the large sample to estimate the population parameter and then tested the other sample to see if it is a member of that same population, is a standard method in statistics. JR 152. Ken Fritsch Re: #150 But you are correct that Paul Linsay’s analysis is incomplete. A goodness-of-fit test is required, AND a power test is required to check the Type-2 error rate. 
I did a chi square test for a Poisson fit and a normal fit and the fit was significantly better for a Poisson distribution than for a normal one. The fit for the period 1945 to 2006 was excellent for a Poisson fit and good for the period 1851 to 1944 — using of course two different means and standard deviations. Re: #151 Ken Fritsch says: the year 2005 was a very unusual year Thank you, Ken. So unusual in fact that a year like this should only occur once in 2000 years if the Poisson assumption is correct. If essentially all of the data fits a Poisson distribution and one year shows a very statistically significant deviation, I would not necessarily be inclined to throw out the conclusion that the data fits a Poisson distribution reasonably well. A 1 in 2000 year occurrence has to happen in some 60 year span. The evidence says that the occurrence of hurricanes can be best approximated by a Poisson distribution, with all the implications of that, but that does not mean that nature allows for a perfect fit, as that is seldom the case. Also, what if some of the hurricanes in 2005 were counted simply because of marginal wind velocity measurements? After all, we are counting using man-made criteria and measurements. It is not exactly like we are measuring hard quantities in the realm of physics. 153. John Creighton #47 Willis, I’ve heard of sinusoidal distributions but it never crossed my mind to combine it with a Poisson distribution. Great idea. Have you looked at the power spectrum of the hurricanes? Perhaps you can identify a few spectral peaks. 154. John Reid Ken Fritsch says: A 1 in 2000 year occurrence has to happen in some 60 year span. Reminds me of my granny who used to say “well someone’s got to win it” whenever she bought a lottery ticket. and: The evidence says that the occurrence of hurricanes can be best approximated by a Poisson distribution Is this the evidence which is under discussion here or do you know some other evidence that you haven’t told us about? If not, aren’t you assuming what you are trying to prove? Go back to my original post and look at the null hypothesis which I set up. Because the computed probability was extremely low we must reject the null hypothesis. Okay? Therefore EITHER 2005 was a special year OR the underlying distribution is not Poisson distributed. It’s your choice. It appears you have chosen the former. I am happy with that. 2005 was a significantly different year from the preceding years. The next step is to find out why. Let’s use statistics as a research tool rather than a rhetorical trick. I am not arguing in favour of AGW. I am arguing against eyeballing graphs and in favour of using quantitative statistics. Paul Linsay picked a lousy data set with which to demonstrate his thesis. It’s a pity he didn’t do it 2 years ago; it might have worked without the 2005 datum. JR 155. John Creighton I could really use a latex preview. Please delete the above post: I find this an intriguing discussion. I think John Reid has a good point. A Poisson or sinusoidal Poisson may be a good distribution to describe most of the process but may not describe tail events (clustering) well. John Reid also puts forth the other hypothesis that in recent years the number of hurricanes is increasing. So then why not use some kind of Poisson regression. Say: $\lambda(t) = \lambda_1 t + \lambda_0$. Then: $\Pr(Y_i = y_i) = \exp(-\lambda_1 t - \lambda_0)\,(\lambda_1 t + \lambda_0)^{y_i} / y_i!$. For the case of M independent events: $\Pr(Y_1 = y_1, \ldots, Y_M = y_M) = \Pr(Y_1 = y_1) \cdots \Pr(Y_M = y_M)$. I think if you take the log of both sides and then find the maximum value by varying $\lambda_1$ and $\lambda_0$ you will get the optimal value. Recall the maximum occurs where the derivative is equal to zero. It looks like you could reduce the problem to finding roots of a polynomial. Perhaps there are more numerically robust ways to handle the problem. That said, using the roots of a polynomial could give an initialization to a gradient-type optimization algorithm. 156. John Creighton Another thought: once you have found the optimal value for the slope of the mean in the Poisson distribution, one could do as Willis has done in post 47. But this time, instead of plotting a sinusoidal Poisson distribution, we plot a distribution which is a composition of a linear trend with a Poisson distribution. Given that in reality the mean certainly has a sinusoidal component and maybe a linear component, it should be pretty clear which class of distributions we should be looking at. We must remember all models are an approximation. The point is not to disprove a model but to find the model that is the best balance between the fewest number of parameters and the greatest accuracy. 157. DeWitt Payne Isn’t the real question not whether the data approximate a Poisson distribution, but whether the mean of the distribution is constant or varies with time? The Cusum chart that I posted as comment #19 on the continuation of this thread clearly demonstrates the mean isn’t constant and that the mean has increased to about 8 since 1995. Hurricanes occur in small numbers each year. You can’t have less than zero hurricanes. Of course the distribution will appear to be approximately Poisson. How can it not be? 158. Bob K I found an informative primer on tropical cyclones. Global Guide to Tropical Cyclone Forecasting Here are a couple excerpts from chapter one related to the Poisson distribution. Care is needed in the interpretation of these data. A frequency of 100 cyclones over 100 years indicates an average of 1 per year. This should not be interpreted as a 100% probability of a cyclone occurring on that date. Rather, use of the Poisson distribution (Xue and Neumann, 1984) indicates a 37% chance of no tropical cyclone occurring. This distribution provides an excellent estimate of occurrence probability for small numbers of cyclones in limited regions. If a long period of accurate record is available Neumann, et al. (1987) found that the use of relative frequencies provide a better estimate of event probability. A useful estimate of the number of years having discrete tropical cyclone occurrence in a particular area (the number of years to expect no cyclones, 1 cyclone, etc) may be obtained by use of the Poisson distribution. Discussion on this application is given by Xue and Neumann (1984). 159. John Creighton DeWitt Payne, The sinusoidal Poisson distribution is exactly that. It is a Poisson distribution where the mean changes with time. A sinusoidal Poisson distribution should be more tail-heavy than the linearly increasing mean which was suggested by John Reid. However, perhaps a modulated linearly increasing mean would be even more tail-heavy. 160. Ken Fritsch Isn’t the real question not whether the data approximate a Poisson distribution, but whether the mean of the distribution is constant or varies with time?
The Cusum chart that I posted as comment #19 on the continuation of this thread clearly demonstrates the mean isn’t constant and that the mean has increased to about 8 since 1995. Hurricanes occur in small numbers each year. I agree that the count data are better approximated by a Poisson distribution that has a change in mean with time. I suspect that your Cusum chart is probably overly sensitive in picking up a statistically significant change in mean. To reiterate what I found for the time periods below for a mean, Xm, and the probability, p, of a fit to a Poisson distribution:
1851 to 2006: Xm = 5.25 and p = 0.087
1945 to 2006: Xm = 6.10 and p = 0.974
1851 to 1944: Xm = 4.69 and p = 0.416
Now the probability, p, for the period 1945 to 2006 shows an excellent fit to a Poisson distribution, while 1851 to 1944 shows a good fit and 1851 to 2006 shows a poorer fit, but not in the reject range of less than 0.05. As RichardT noted, we need to look at Type II errors, and those errors of course increase from very small for 1945 to 2006, to intermediate for 1851 to 1944, to large for 1851 to 2006, as evidenced by the values of p. My other exercise in this thread was to determine the sensitivity of the goodness-of-fit test to a changing mean, and I found that while the test does not reject a fit for excursions as large as 1 count per year from the mean, the value of p decreases significantly. If one had a small and/or slowly changing sinusoidal variation in mean shorter than the time periods measured (for a fit to a Poisson distribution), it is doubtful that the chi square test would detect it. I think one can make a very good case for the Poisson fit from 1945 to 2006 and a complementary reasonable case for a Poisson fit from 1851 to 1944 with a smaller mean. With the evidence for earlier undercounts of TCs, the smaller early mean with a Poisson distribution could agree with that evidence — if one assumed the earlier TCs were missed randomly. Or one could, if a priori evidence was there, make a case for large-period sinusoidal variations in TC occurrences. I am not sure how a case would be made for a slowly changing mean from a Poisson distribution as a function of increasing SSTs for the period 1945 to 2006, but I am sure that someone has or will make the effort. Small changes in the 1945 to 2006 mean due to under (or over) counting and/or trends due to small temperature changes and/or a small cyclical variation probably would not be detectable in the chi square goodness-of-fit test. Having said all that, the fit for that time period is excellent. 161. David Smith Minor note: the period 1945-2006 saw a drift upwards in storm count due to increased counting of weak, short-lived storms and of those hybrids called subtropical storms. If anyone desires to remove those, so as to give a more apples-to-apples comparison, then remove those that lasted 24 hours or less (at 35 knot or higher winds) and the storms which were labeled in the database as subtropical. 162. DeWitt Payne Re: #160 The cusum chart is designed to detect small changes in a process, in the range of 0.5 to 2.0 standard deviations, more rapidly than a standard individual control chart. So I don’t think it is overly sensitive considering that the changes observed are likely to be small. In fact, I’m rather surprised that hurricane researchers haven’t already used it. But then it is a technique used mostly in industry. If the annual count of hurricanes were truly random then annual hurricane predictions would have no skill compared to a prediction based on the median (or maybe the mode) of the distribution. This should be testable and probably already has been. Anybody have a quick link to the data before I go looking?
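Re the question running through #155, #157 and #160 of whether the Poisson mean is constant or drifting: the textbook check is a Poisson regression of counts on year. A minimal R sketch, not the published test for a partially incomplete record discussed elsewhere in the thread; hurricane.count is assumed to be the same series used in #170:

# Simple test for a secular trend in the Poisson rate.
yr   <- 1945:2006
x    <- hurricane.count[yr - 1850]     # assumes the series starts in 1851, as in #170
fit0 <- glm(x ~ 1,  family = poisson)  # H0: constant lambda
fit1 <- glm(x ~ yr, family = poisson)  # H1: log(lambda) linear in year
summary(fit1)$coefficients["yr", ]     # Wald test of the trend term
anova(fit0, fit1, test = "Chisq")      # likelihood-ratio test of the same thing

John Creighton’s identity-link version in #155, lambda(t) = lambda_0 + lambda_1 * t, can be fit the same way with family = poisson(link = "identity"), provided the starting values keep the fitted rate positive.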
163. David Smith RE #162 Bill Gray’s individual hurricane forecasts, including his review of their forecast skill, can be found in the individual reports located here . I don’t know of any comprehensive study, though I seem to recall that Greg Holland did some kind of review (which CA’s willis later found to be of little merit). I also seem to recall that willis did a review on CA and found skill in Gray’s forecasts. 164. Steve McIntyre Solow and Moore 2000, which I mentioned in #1 above, includes a test for whether there is a secular trend in the Poisson parameter (concluding for Atlantic hurricanes 1930-1998 that there isn’t). If anyone can locate an implementation of this test in R (which seems to have every test under the sun), I’d be interested. It seems to me that the strategy for testing the presence of a secular trend (see the null hypothesis H0 described there) would be equally applicable, mutatis mutandis, for testing the presence of a sin(b*t) term rather than just a t term. Andrew R. Solow and Laura Moore, Testing for a Trend in a Partially Incomplete Hurricane Record, Journal of Climate, Volume 13, Issue 20 (October 2000), pp. 3696–3699. url 165. Ken Fritsch Re: #161 Minor note: the period 1945-2006 saw a drift upwards in storm count due to increased counting of weak, short-lived storms and of those hybrids called subtropical storms. I would agree that ideally as much of the observational differences as can be presumed should be removed from the data before looking at fits to a Poisson distribution. Re: #162 I am not aware of a Cusum analysis being used to evaluate statistically significant changes in means, but have seen it used exclusively as an industrial control tool. Maybe a good reference to a statistical book or paper would convince me. Re: #163 I don’t know of any comprehensive study, though I seem to recall that Greg Holland did some kind of review (which CA’s willis later found to be of little merit). I also seem to recall that willis did a review on CA and found skill in Gray’s forecasts. I found that without the late adjustments (closer to the event) there was no skill in Gray’s forecasts. I also have the idea that he used some adjustments that were not necessarily part of any objective criteria but were more subjective. I did find skill when late adjustments were used. Re: #164 Solow and Moore 2000 which I mentioned in #1 above includes a test for whether there is a secular trend in the Poisson parameter (concluding for Atlantic hurricanes 1930-1998 that there isn’t.) I need to read this link more closely, but if they have looked at a fit of landfalling hurricanes to a Poisson distribution, I must say: why didn’t I think of that? 166. John Creighton John Reid (#141) writes: “The mean count for the 60 years 1945 to 2004 inclusive is 5.97. We will use this as an estimate of the parameter of the distribution. I have calculated that the probability of obtaining a count of 15 or greater from a Poisson distributed population with a parameter of 5.97 is .0005, ie 1 in 2000. We can therefore reject the null hypothesis at the 0.1 percent level.
It follows that either 2005 is an exceptional year which is significantly different from the 60 preceding years, or that the process which generates annual hurricane counts is not a Poisson distribution. Personally I prefer the latter interpretation. Hurricane generation is likely to depend on large-scale ocean parameters such as mixed layer depth and temperature which persist over time. Because of this it is unlikely that successive hurricanes are independent events. If they are not independent then they are not the outcome of a Poisson process. Poisson works only if there is no clustering of events."
I was thinking about your comments and I've decided that if you are interested in a good fit of the tail statistics then you should not estimate the mean via a simple average. You should use maximum likelihood to estimate the mean. This will mean that the fit you obtain for the distribution will have fewer of these highly unlikely events but will have a worse chi-squared score.

167. Pat Frank
#151 — I never claimed to be an expert in statistics. Whatever I may be expert in, or not, doesn't change that no matter when a 15-hurricane year showed up across however many years you like, your method of isolating out that particular year requires it to be highly improbable and demanding of a physical explanation. Your null experiment, in other words, telegraphs your conclusion. There is a physical explanation, of course, but in a multiply-coupled chaotic system a resonance spike like a 15-hurricane year will be a fortuitous additive beat from the combination of who-knows-how-many underlying energetic cycles. It's likely no one will ever know what the specific underlying physical cause is for the appearance of any particular number of hurricanes.
There is another aspect of this which is overlooked. That is, in a short data set like the above, there won't have been time for the appearance of very many of the more extreme events. That means the calculated mean of what amounts to a truncated data series is really a lower limit of the true mean. A Poisson distribution calculated around that lower limit will leave isolated whatever extremes have occurred, because the high-value tail will attenuate too quickly. For example, the Poisson probability of 15 hurricanes in a given year increases by factors of 1.7 and 3.2 over mean = 6.1 if the true mean is 6.5 or 7 hurricanes per year, respectively. That 6.1 per year is only a lower limit of the true mean then makes 15 hurricanes less unusual, and so perhaps less demanding of an explanation that, in any case, would probably be unknowable even if we had a physically perfect climate model.*
*E.g. M. Collins (2002), Climate predictability on interannual to decadal time scales: the initial value problem, Climate Dynamics 19, 671-692.

168. bender
Re #164 Good question.

169. Steve McIntyre
#141. I agree with John Reid. Poisson is only a hypothesis. Some sort of autocorrelation certainly seems possible to me, especially once the year gets started. My guess as to a low 2006 season was based on the idea that it had a slow start and whatever conditions favored the slow start would apply through the season. Also, for all we know, the true distribution may be a somewhat fat-tailed variation of the Poisson distribution. I doubt that it would be possible to tell from the present data.

170. Steve McIntyre
#141. John Reid, wouldn't it make more sense to test the hurricane distribution for 1945-2006 as Poisson rather than calculating a parameter for 1945-2004 and then testing 2005?
A test for Poisson is that the Poisson deviance is approximately chi-squared with degrees of freedom equal to the length of the record. Here's a practical reference. Here's an implementation of this test in R:

index <- (1945:2006) - 1850; N <- length(index); N # 62
x <- hurricane.count[index] # hurricane.count is a series commencing 1851
glm0 <- glm(x ~ 1, family = poisson)
x_hat <- exp(coef(glm0)); x_hat # 6.145161
test <- 2 * sum(x * log(x / x_hat)); test # Poisson test, asymptotically chi2 with df = N
# 61.63861
pchisq(test, df = N) # 0.5109402

The value of x_hat here is no surprise, as it is very close to the mean. Including the 2005 and 2006 records, the Poisson deviance is almost exactly equal to the degrees of freedom.

171. Ken Fritsch
Re: #170 I was concerned that the numbers in your post did not match the chi-square test I used to obtain a p = 0.97 for a fit of the 1945-2006 hurricane counts to a Poisson distribution. I then read the paper linked and believe that I see the approach used there is much different than the one I used. (I excerpted it below and ask whether I am correct that this is what you used — without the a priori information incorporated.) The p = 0.51 for a Poisson fit that I believe I can deduce from your printouts, while indicating a good fit, is significantly below the value I calculated using the approach with which I am familiar. I am wondering whether the binning involved in my approach, which requires at least 5 counts per bin, is what makes the difference here. The df in my approach are of course related to the number of count categories minus 2, which in this case could not exceed 13 but is made smaller by the binning of more than one count value to meet the 5-count minimum requirement. Binning the 15 count for 2005 with other lesser counts to meet the 5-count minimum requirement in effect takes a very low probability appearance of 1 occurrence at 15 counts and combines it into a single bin where the probability becomes that of 5 counts at 11 or 12 and above, which will have a higher probability.

"An alternative method of interpreting the model deviance is to estimate what the deviance value should be for a sparse data set if the model fitted the data well, and this is possible using simulations of the data. The fitted values which are derived from the original Poisson model may be regarded as the means of a set of Poisson random variables and, assuming that these fitted values are correct, random numbers for each observation can be generated and compared with the Poisson cumulative distribution function to provide simulated data. A new set of fitted values may then be estimated by fitting a Poisson model to these simulated data, and the deviance of this new model may be calculated by comparing the simulated data, which are now treated as the observed values, with the new set of fitted values. Because the data have been produced according to a known model, the deviance is approximately what we would expect if a correct model were fitted to the original, sparse data set. If the observed model deviance lies within the middle 95% of the simulated distribution, it is reasonable to accept the model at the 0.05 significance level. The application of this approach to the sparse data set analysed in this study is described below. A priori information may also be included in generalized linear models… Such a priori information is incorporated into the model by treating it as a covariate with a known parameter value of one."
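The simulation-based check described in that excerpt can be sketched in a few lines of R. This is only a sketch, reusing the `glm0` fit from #170 above; the number of simulations and the seed are arbitrary choices.

```
# Parametric bootstrap of the Poisson deviance, as described in the excerpt:
# simulate data from the fitted model, refit, and collect the deviances.
set.seed(1)                       # for reproducibility (illustrative)
nsim <- 1000
mu   <- fitted(glm0)              # fitted means from the original Poisson model
sim.dev <- replicate(nsim, {
  y.sim <- rpois(length(mu), mu)              # simulated counts
  deviance(glm(y.sim ~ 1, family = poisson))  # deviance of the refitted model
})
quantile(sim.dev, c(0.025, 0.975))  # middle 95% of the simulated deviances
deviance(glm0)                      # observed deviance, for comparison
# If the observed deviance falls inside the middle 95%, the Poisson model
# is not rejected at the 0.05 level under this scheme.
```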
172. Ken Fritsch
Does anyone have the landfalling hurricane data, and/or a link to it, for the NATL over the extended period of 1851 to 2006 — or a shorter period such as 1930 to 2006? I would like to see how well that data fits a Poisson distribution over the whole period and perhaps for split periods.

173. Steve McIntyre
I collated the landfall data into a readable form. The HURDAT data is formatted in a very annoying way. TRY:

landfall = read.table("http://data.climateaudit.org/data/hurricane/hurdat/landfall.hurdat.csv", header = TRUE, sep = "\t")
landfall.count = ts(tapply(!is.na(landfall$year), factor(landfall$year, levels = 1851:2006), sum), start = 1851)
landfall.count[is.na(landfall.count)] = 0

174. richardT
Here is some code that can be used to give an appreciation of the type-2 errors in Paul Lindsay's analysis. This simple analysis tests if Ho (the process is Poisson) is rejected for mixtures of Poisson processes using a chi-squared test. The mixture has two populations of ~equal size, with means 6.1+offset and 6.1-offset, where the offset is varied between 0 and 3. This is repeated many times, and the proportion of times Ho is rejected is summed.

```
library(vcd)
off.set
```

When the offset is zero, Ho is rejected 5% of the time at p=0.05; this is expected. The test has almost no power with small offsets, the proportion of rejects is only slightly higher than the type-1 error rate. With an offset of 2, i.e. population means of 4.1 and 8.1, Ho is rejected about half the time. Still larger offsets are required to reliably reject Ho. This case is extreme, using two discrete populations rather than a continuum, but still Ho is not reliably rejected unless the difference between populations is large - Paul Lindsay's test has little power. An alternative test, GLM, is much more powerful, and can be used to show that the hurricane counts are not Poisson distributed.

175. richardT
Code again

```
library(vcd)
off.set=seq(0,3,.25) #offset
N=100 #increase to 1000 for more precision
nyr=63 #number of years in record
rejectHo
```

176. richardT
last try

```
library(vcd)
off.set=seq(0,3,.25) #offset
N=100 #increase to 1000 for more precision
nyr=63 #number of years in record
rejectHo=sapply(off.set,function(off){
  mean(replicate(N,{
    x=rpois(nyr,6.1+c(-off,off))
    summary(goodfit(x,"poisson","MinChisq"))[3]<0.05
  }))
})
plot(off.set,rejectHo, xlab="Offset", ylab="Probability of rejecting Ho")
rbind(off.set,rejectHo)
#          [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13]
# off.set  0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00  2.25  2.50  2.75     3
# rejectHo 0.05 0.07 0.09 0.09 0.17 0.20 0.44 0.49 0.63  0.83  0.97  0.98     1
```

177. Ken Fritsch
Re: #174 I found that for the chi square goodness of fit to a Poisson distribution from 1945 to 2006 went from 0.974 with a mean of 6.1 to a p

178. Ken Fritsch
Re: #174 I found that for the chi square goodness of fit to a Poisson distribution for 1945 to 2006 hurricane counts went from 0.974 with a mean of 6.1 to a p

179. Ken Fritsch
Re: #174 I found that the chi-square goodness of fit to a Poisson distribution for the 1945 to 2006 hurricane counts went from p = 0.974 with a mean of 6.1 to a p less than 0.05 when I inserted means of 5.0 and 7.2. The value of p decreases slowly for the initial incremental changes from 6.1 and then decreases at an ever-increasing rate as the changes get further from 6.1. Do you have more details on the alternative GLM test? I now remember that the greater or lesser than signs will stop the post.
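As a sketch of what a GLM-based check might look like, here is one common approach, an overdispersion check against the Poisson assumption. This is an illustration only, not necessarily the specific test richardT had in mind, and it reuses the `x` and `glm0` objects defined in #170.

```
# A Poisson model forces the variance to equal the mean, so compare the
# Pearson dispersion of the intercept-only fit to 1.
disp <- sum(residuals(glm0, type = "pearson")^2) / df.residual(glm0)
disp  # values well above 1 suggest overdispersion relative to Poisson
# Approximate p-value from the chi-squared distribution of the Pearson statistic:
pchisq(disp * df.residual(glm0), df = df.residual(glm0), lower.tail = FALSE)
# A quasipoisson fit reports the same dispersion estimate directly:
summary(glm(x ~ 1, family = quasipoisson))$dispersion
```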
180. John Reid
Steve McIntyre says (#170): "#141. John Reid, wouldn't it make more sense to test the hurricane distribution for 1945-2006 as Poisson rather than calculating a parameter for 1945-2004 and then testing 2005?"
Yes it would. I only did it the way I did it to allow me to make the either/or argument more clearly. As it happens it may well be that the first 60 years is not significantly different from a Poisson distribution. I'll have a look at it. JR

181. Steve McIntyre
I tried the following test for a trend in the Poisson parameter (calculating glm0 as above). The trend coefficient was not significant.

glm1 = update(glm0, formula = x ~ 1 + index)
summary(glm1)
# Deviance Residuals:
#      Min       1Q   Median       3Q      Max
# -1.98996 -0.88819 -0.03531  0.52119  2.73641
# Coefficients:
#              Estimate Std. Error z value Pr(>|z|)
# (Intercept) 1.416262   0.366138   3.868  0.000110 ***
# index       0.003170   0.002866   1.106  0.268673
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# (Dispersion parameter for poisson family taken to be 1)
# Null deviance: 61.639 on 61 degrees of freedom
# Residual deviance: 60.414 on 60 degrees of freedom
# AIC: 287.76
# Number of Fisher Scoring iterations: 4

anova(glm1, glm0)
# Model 1: x ~ index
# Model 2: x ~ 1
#   Resid. Df Resid. Dev Df Deviance
# 1        60     60.414
# 2        61     61.639 -1   -1.225

182. richardT
#181 Try a second-order term in the GLM model, else use a GAM.

183. Ken Fritsch
Doing my standard chi-square test for goodness of fit to a Poisson distribution for landfalling hurricanes for the time periods 1851 to 2005, 1945 to 2005 and 1851 to 1944, I found the following means, Xm, and chi-square probabilities, p:

1851 to 2005: Xm = 1.81 and p = 0.41
1945 to 2005: Xm = 1.70 and p = 0.03
1851 to 1944: Xm = 1.88 and p = 0.58

The trend line for landfalling hurricane counts over the 1851 to 2005 time period has y = -0.0016x + 4.93 and R^2 = 0.0025.

184. Ken Fritsch
Re: #174 "This case is extreme, using two discrete populations rather than a continuum, but still Ho is not reliably rejected unless the difference between populations is large - Paul Lindsay's test has little power. An alternative test, GLM, is much more powerful, and can be used to show that the hurricane counts are not Poisson distributed."
RichardT, I am not sure how to interpret your findings, but I would say that the p values that you and I derived for the 1945 to 2006 hurricane count fit to a Poisson distribution are close to the same: 0.95 and 0.97. It is that number that informs us of the fit for that time period and gives the measure of Type II errors. Those numbers indicate a small Type II error. Your sensitivity test shows considerably less robustness for detecting changes in means than my less-than-formal back-of-the-envelope test did, but that exercise is beside the point as it does not change the p value for the actual fit found. I know that chi-square goodness-of-fit tests can be less than robust, and more sensitive tests, where applicable, should be applied. For goodness of fit to a normal distribution, I was shown years ago that skewness and kurtosis tests could be superior to the chi-square test, particularly when the data are sparse and binning of the data becomes problematic. I do not know how to interpret the difference in the goodness-of-fit tests between that for all hurricanes and that for landfalling hurricanes for the 1945 to 2005(6) time period, except to note that the sparse data for a small Poisson mean reduce the degrees of freedom to very small numbers for a chi-square test.
The discrepancies between a predicted Poisson and the actual distribution for landfalling hurricanes were in the middle of the range and not at the tails. The telling analyses to me are the lack of trends in the landfalling hurricanes and in the partitioned data that Steve M and David Smith have presented and analyzed – all of which point to some early undercounts and (lacking better explanations for these findings than I have seen) an immeasurable trend in total hurricanes. I am hoping to see more details from you or Steve M on the Generalized Linear Models alternative test, as I do not have much experience with fitting these models with a Poisson distribution (and ??).

185. Dan Hughes
Speaking of counting things, would the same type of analysis apply to counting days of record-setting high and low temperatures as described here?

186. John Creighton
#185 Dan, I think a similar type of analysis would work, but record high temperatures aren't really a Poisson process. If you find the average distance from the mean temperature at which recorded high temperatures occur, and instead count the number of days on which the temperature deviates from the mean by this amount or more, you would have a Poisson process. This could match closely the counts of record high temperatures but will not equal them exactly.

187. Posted Aug 24, 2007 at 10:56 AM
NAMED STORMS AND SOLAR FLARING
Some meteorologists have observed that the key circumstances that factor into hurricane formation are primarily sea surface temperatures, wind shear, and global wind events. They also believe that stationary high pressure centers over North America and El Nino cycles in the Pacific cause Atlantic hurricanes to turn north into the Mid-Atlantic. Also, there appear to have been fewer hurricanes during El Nino years more recently. While all this may be true in terms of outer symptoms, it does not explain the inner causes of hurricanes, nor does it help to predict the future number of named storms, a process which more recently has not been accurate.
Standard meteorology does not yet embrace the electrical nature of our weather, plasma physics, nor the plasma electrical discharge from near-earth comets. Solar cycles and major solar storm events like X flares are mistakenly ignored. Yet solar flares disrupt the electrical fields of our ionosphere and atmosphere and cause electrical energy to flow between our ionosphere and upper cloud tops in developing storms. Here is what Space Weather recently said and recorded when showing an electrical connection from the ionosphere to the top of storm clouds on August 23, 2007.
GIGANTIC JETS: Think of them as sprites on steroids: Gigantic Jets are lightning-like discharges that spring from the top of thunderstorms, reaching all the way from the thunderhead to the ionosphere 50+ miles overhead. They're enormous and powerful. You've never seen one? "Gigantic Jets are very rare," explains atmospheric scientist and Jet-expert Oscar van der Velde of the Université Paul Sabatier's Laboratoire d'Aérologie in Toulouse, France. "The first one was discovered in 2001 by Dr. Victor Pasko in Puerto Rico. Since then fewer than 30 jets have been recorded–mostly over open ocean and on only two occasions over land."
The resulting increased electrical currents affect the jet streams [which are also electrical], which energize and drive our developing storms and hurricanes.
Here is what NASA said about the recent large X-20 solar flare on April 3, 2001[release 01-66] “This explosion was estimated as an X-20 flare, and was as strong as the record X-20 flare on August 16, 1989, ” said Dr. Paal Brekke, the European Space Agency Deputy Project Scientist for the Solar and Heliospheric Observatory (SOHO), one of a fleet of spacecraft monitoring solar activity and its effects on the Earth. “It was more powerful that the famous March 6, 1989 flare which was related to the disruption of the power grids in Canada.” Canada had record high temperatures that summer [This writers comments. not by NASA] Monday’s flare and the August 1989 flare are the most powerful recorded since regular X-ray data became available in 1976. Solar flares, among the solar system’s mightiest eruptions, are tremendous explosions in the atmosphere of the Sun capable of releasing as much energy as a billion megatons of TNT. Caused by the sudden release of magnetic energy, in just a few seconds flares can accelerate solar particles to very high velocities, almost to the speed of light, and heat solar material to tens of millions of degrees. The flare erupted at 4:51 p.m. EDT Monday, and produced an R4 radio blackout on the sunlit side of the Earth. An R4 blackout, rated by the NOAA SEC, is second to the most severe R5 classification. The classification measures the disruption in radio communications. X-ray and ultraviolet light from the flare changed the structure of the Earth’s electrically charged upper atmosphere (ionosphere). This affected radio communication frequencies that either pass through the ionosphere to satellites or are reflected by it to traverse the globe. [Note red highlighting is by this author ,not NASA] Here is what flares affect . Industries on the ground can be adversely affected, including electrical power generation facilities, ionospheric radio communications, satellite communications, cellular phone networks, sensitive fabrication industries, plus the electrical system of our entire planet including equatorial jet streams, storm clouds, hurricanes, ionosphere, northern and southern jet streams, earth’s atmosphere, vertical electrical fields between earth’s surface and the ionosphere just to mention a few. The reason for all the extra named storms recently 2000-2005 is not global warming but the increased number of significant solar flares, comet fly bye’s and the unique planetary alignment during the latter part of solar cycle #23. These events can occur any time during a solar cycle but are more prominent around the years of the solar maximum and especially during 6-7 years of the ramp down from maximum to minimum. Refer to the web page of CELESTIAL DELIGHTS by Francis Reddy http://celestialdelights.info/sol/XCHART.GIF for an excellent article and illustration of solar flares and solar cycles during the last three solar cycles. The use of simple Regression analysis of past named storms to predict future storms will continue to be of limited value unless these randomly occurring solar events are taken into account as well .One cannot accurately predict the score of future ball games by simply looking at past ball games. 
You have to look at each new year based on the unique circumstances of that new season. The attached table clearly illustrates why there were so few storms [only 10] in 2006 and why the previous years 1998-2005 were so much more active in terms of named storms, namely [16-28 storms/year]. The table for example shows that during 2003 there were 16 named storms and twenty [20] X class solar flares during the main hurricane season of June 1 - November 30. Three of the solar flares were the very large ones like X28, X17 and X10. On the other hand, during 2006 there were only 10 named storms and only 4 X size solar flares, of which none were during the hurricane season. During 2005 and 2003 there were 100 and 162 M size solar flares respectively, while in 2006 there were only 10. The 2000-2005 increase of named storms was not due to global warming, or the years 2006-2007 would have continued to be high in terms of storms. During the period 2000-2005, much more electrical energy was pumped into our atmosphere by the solar flares, especially the larger X size flares. There may also have been a planetary electrical field increase brought on by the close passing of several major comets and special planetary alignments, like during September 6, 1999 and August 26-29, 2003. The year 2007 will likely be similar to 2006 with fewer storms, as there has been no major solar flaring to date or major passing comets. It is possible but unlikely that major solar flaring will take place during a solar minimum year, which the year 2007 is. Unless there will be significantly more solar flaring during the latter part of this year, the number of named storms will again be closer to the average of 9-10 and not 15-17 as originally predicted nor the current predictions of some 13-15 storms.

YEAR | # OF X SIZE SOLAR FLARES | LARGE FLARES | # DURING HURRIC. SEASON* | EL NINO YEAR | # OF NAMED STORMS (adjusted) | (not adjusted) | SOLAR PHASE / PL | COMETS NEAR
1996 | 1  |               | 1  |     | 13 | 12 | solar min | HALE BOPP
1997 | 3  | X9.4          | 3  | YES | 8  | 7  |           |
1998 | 14 |               | 10 | NA  | 15 | 14 |           |
1999 | 4  |               | 4  |     | 13 | 12 | PL        | LEE
2000 | 17 | X5.7          | 13 |     | 16 | 15 | solar max | ENCKE
2001 | 18 | X20, X14.4    | 8  |     | 16 | 15 | PL [six]  | C-LINEAR 2001A2B
2002 | 11 |               | 9  | YES | 13 | 12 |           |
2003 | 20 | X28, X17, X10 | 15 |     | 16 | 16 | PL        | NEAT V1
2004 | 12 |               | 11 | YES | 15 | 15 |           |
2005 | 18 | X17           | 12 | NA  | 28 | 28 |           |
2006 | 4  | X9            | 0  | YES | 10 | 10 |           |
2007 (to date) | 0 |      | 0  | NA  | 5  | 5  | solar min |

* Assumed season June 1 to Nov 30. C and M flares were not included; some flares last longer and deposit more energy, and this was not noted.
NA: El Nino present but not during the hurricane season, or only very minor El Nino months at the beginning of the year.
PL: Special planetary alignment during hurricane season.

Since major solar flares are difficult to predict, one can recognize what phase of the solar cycle one is predicting into and use that as an indicator of a possible below average, average or above average solar storm level, which in turn translates to below average, average or above average named storms. See the paper by T. Bai called PERIODICITIES IN FLARE OCCURRENCE, ANALYSIS OF CYCLES 19-23 on [email protected] Above average flares occur during the 6-7 year solar ramp-down period and, to a lesser extent, the 3-4 years around the solar maximum. Average and below average flares occur at solar minimum and during the 2-3 years of the solar build-up leading to solar maximum. Specific planetary alignments and the swing of major comets around our sun will also tend to increase the named storm activity. There are exceptions to every rule and sometimes things are different from the normal or the past.
For more information about the new science of weather and the electrical nature of our planet and our planet's atmosphere, refer to the writings of James McCanney and his latest book PRINCIPIA METEOROLOGIA – THE PHYSICS OF THE SUN.

188. Frank Upton
Months late, I know, but I just wanted to suggest that a time-variable observation bias is quite likely in hurricane detection. A hurricane is defined as a storm in which sustained windspeeds of more than a certain speed are found, at some point in its career. In the past, windspeeds could mostly only be measured accurately on land. Now, windspeeds can be measured remotely by radar. It is therefore more likely now that a storm which qualifies as a hurricane will be identified as such than it was, say, 60 years ago. Or have I, too, missed something?

One Trackback
1. By Numberwatch by John Brignell » DDD on Feb 24, 2007 at 11:00 AM
[...] to that number 15, there seems to have been an outbreak of Data Deficiency Disorder over at Climate Audit. When you are stuck with a limited number of data, it is tempting to try all sorts of a posteriori [...]
http://mathhelpforum.com/number-theory/200434-perfect-square-s-odd-number-factors.html
# Thread:

1. ## Perfect squares, odd number of factors
If a number is a perfect square, it will have an odd number of factors (e.g., 4 has factors 1, 2, 4), whereas all other numbers have an even number of factors. Is the converse true? Please explain why.

2. ## Re: Perfect squares, odd number of factors
By converse, do you mean the following statement: "If a number has an odd number of factors, then it is a perfect square, and if a number has an even number of factors, then it is not a perfect square"? The original statement says that being a perfect square is equivalent to having an odd number of factors, so of course the converse is true.

3. ## Re: Perfect squares, odd number of factors
Yes, that's what I meant. But I am not convinced of the equivalence. Can you explain the proof a bit further?

4. ## Re: Perfect squares, odd number of factors
Suppose that the number of factors of n is odd. We need to show that n is a perfect square. Suppose the contrary; then by the statement in post #1, n has an even number of factors, a contradiction. In general, if you showed A implies B and (not A) implies (not B), then you showed that A and B are equivalent, so either one implies the other. Also, (not A) and (not B) are equivalent in this case.

5. ## Re: Perfect squares, odd number of factors
No, I meant: what if it's just given that "If a number is a perfect square, it will have an odd number of factors (e.g., 4 has factors 1, 2, 4)"? Now how would you prove the converse?

6. ## Re: Perfect squares, odd number of factors
So, I understand that the part in post #1 after "whereas" is not given. Without loss of generality, $n = p_1^{a_1}\cdot\ldots\cdot p_r^{a_r}$ for primes $p_1,\dots,p_r$. The number of factors of n is given by the divisor function $d(n)=(a_1+1)\cdot\ldots\cdot(a_r+1)$. If d(n) is odd, then all the $a_i$'s are even, so $n = (p_1^{a_1/2}\cdot\ldots\cdot p_r^{a_r/2})^2$.

7. ## Re: Perfect squares, odd number of factors
Much obliged.

8. ## Re: Perfect squares, odd number of factors
Emakarov has given a first-rate proof, but I can't resist supplying a more basic proof that does not require knowledge of the properties of the divisor function.
Theorem: If n is not a square, then it has an even number of divisors.
Proof: Suppose x is a divisor of n, i.e. there is an integer y such that xy = n. Then y is also a divisor of n. (Here is the critical step.) Since n is not a square, x is not equal to y. So we have paired off the divisors of n into pairs {x, y}, hence the number of divisors must be even.
Corollary (the contrapositive): If n has an odd number of divisors, then it is a square.

9. ## Re: Perfect squares, odd number of factors
Thanks, both of you guys!
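As a quick numerical sanity check of this equivalence, here is a short R sketch (base R only; the cut-off of 200 is an arbitrary choice):

```
# Count the divisors of each n and compare "odd number of divisors"
# with "n is a perfect square".
num.divisors <- function(n) sum(n %% seq_len(n) == 0)
n <- 1:200
odd.divisors <- sapply(n, num.divisors) %% 2 == 1
is.square    <- floor(sqrt(n))^2 == n
all(odd.divisors == is.square)   # TRUE: the two conditions agree on 1..200
n[odd.divisors]                  # 1, 4, 9, 16, ... -- exactly the perfect squares
```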
http://math.stackexchange.com/questions/184111/how-to-show-that-e-g-cosz-is-analytic-using-cauchy-riemann-differential-e/184112
# How to show that e.g. $\cos(z)$ is analytic using Cauchy- Riemann differential equations? How to show that e.g. $\cos(z)$ is analytic using Cauchy-Riemann differential equations [$u_x(x,y)=v_y(x,y)$ and $u_y(x,y)=-v_x(x,y)$]? Do all analytic functions satisfy Cauchy-Riemann differential equations (CRDE)? What is the relationship between analyticity of complex functions and Cauchy-Riemann differential equations? I know that holomorphic (analytic?) functions satisfy CRDE, but are functions that satisfy CRDE always analytic (holomorphic)? - ## 4 Answers Start by rewriting: if $z$ is complex, then let $z=x+iy$. Then we have the function $\cos(x+iy)$. Now you can expand that with the rule $\cos(A+B)=\cos(A)\cos(B)-\sin(A)\sin(B)$. (Because you'll be left with bits like e.g. $\cos(iy)$, you'll may want to replace these with hyperbolic functions with e.g. $\cos(ip)=\cosh(p)$ and a similar relationship for $\sin$.) You'll be left with some complex function which we'll call $u+vi$ - i.e. let $u$ be the real part and $v$ the imaginary part. It is these $u$ and $v$ you are differentiating in the Cauchy-Riemann equations. - So if function is analytic( ~ holomorphic) in $\Omega \subset C$, then it satisfies C-R equations. And if f satisfies C-R equations and the functions $u(x,y)$ and $v(x,y)$ have first partial derivatives which are continuous, then $f$ is analytic(~holomorphic). Am I right? Then what about relationship between real differentiability and complex differentiability? Does they need C-R equations to distinguish them from each other? I mean that the relationship between those properties in case of complex function are defined by C-R equations . – alvoutila Aug 23 '12 at 20:48 I will address your more general questions first. I'm restricting discussion to complex functions of one complex variable, of course. Holomorphic and analytic functions are the same thing. A full proof is given in any complex analysis book, but I will give the outline. Assume all functions are defined in an open connected domain. Call functions with power series expansions at every point in their domain analytic, and call functions that complex-diferentiable holomorphic. If a function is analytic, we can expand it as a power series at every point, and elementary theory of power series shows they are complex-differentiable, so all analytic functions are holomorphic. To show all holomorphic functions are analytic, one uses a result called Cauchy's Integral Theorem to explicitly produce the required power series expansions at every point. It is easily proved (see any book) that all holomorphic (complex-differentiable) functions satisfy the C-R equations, even without showing that holomorphic and analytic functions are the same. However, not all functions satisfying the Cauchy-Riemann equations are analytic (holomorphic). The standard additional condition is to require is that the function have continuous first partial derivatives (when considered as a function $\mathbb{R}^2 \rightarrow \mathbb{R}^2$ ) in addition to satisfying the C-R equations. Now, showing $\cos(z)$ is analytic requires knowing how you defined it. I like to define it in terms of exponentials or power series, and in either case, analyticity is trivial. However, if you are restricted to the fact that $\cos(z)$ is just some function that satisfies the standard trigonometric properties, then I would go with the approach of Erik Pan, also posted in the answers section. 
Briefly: rewrite $\cos(x+iy)$ using the cosine addition formula, rewrite the result as $u+iv$ using hyperbolic functions, where $u$ and $v$ are real-valued, and then verify the C-R equations by differentiating directly. Remember to check that the first partial derivatives are continuous. It sounds like you would benefit greatly from a good complex analysis book. A quick treatment (that covers your questions in greater detail than this answer) is available for free here. - If $f(z)$ is a complex-valued function, we can write it as $f(x+iy) = u(x,y) + iv(x,y)$, where $u$ and $v$ are real-valued functions. Like you said, if $f$ is holomorphic, then the Cauchy-Riemann equations ($u_x = v_y$ and $u_y = -v_x$) are satisfied. However, the converse is not true. For example, if we let $f(x+iy) = \sqrt{|xy|}$, then $f$ satisfies the Cauchy-Riemann equations at $x=y=0$ but is not holomorphic there. Thus, having $u_x = v_y$ and $u_y = -v_x$ at some point $z$ is not enough to conclude that $f$ is holomorphic there. However, if we add that $u$ and $v$ have continuous partial derivatives at $z$, then we can conclude that $f$ is holomorphic at $z$. (Alternatively, we could add that the mapping $(x,y) \mapsto (u(x,y), v(x,y))$ is differentiable as a $\mathbb{R}^2 \to \mathbb{R}^2$ function to conclude that $f$ is holomorphic at $x+iy$. This is a stronger result.) In the case of $f(z) = \cos z$, we have $u(x,y) = \cos x\cosh y$ and $v(x,y) = -\sin x\sinh y$. We can check that the Cauchy Riemann equations hold everywhere, and furthermore, that all four partial derivatives are continuous everywhere. This is enough to conclude that $f$ is holomorphic. - Writing $$z=x+iy\,\,,\,\,x,y\in\Bbb R\Longrightarrow \cos z=\frac{e^{iz}+e^{-iz}}{2}=\frac{\cos x(e^y+e^{-y})-i\sin x(e^y-e^{-y})}{2}=$$ $$\cos x\cosh y-i\sin x\sinh y=u+iv$$ And now you can check the Cauchy-Riemann equations directly, for example: $$u_x=-\sin x\cosh y=v_y\,\,,\,etc.$$ -
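For readers who like to see this numerically, here is a small R sketch that checks the Cauchy-Riemann equations for $\cos z$ by central finite differences, using the $u$ and $v$ given above; the step size and test point are arbitrary choices.

```
# u and v for cos(x + iy) = cos(x)cosh(y) - i sin(x)sinh(y)
u <- function(x, y)  cos(x) * cosh(y)
v <- function(x, y) -sin(x) * sinh(y)

# central finite differences for the partial derivatives
h  <- 1e-6
ux <- function(x, y) (u(x + h, y) - u(x - h, y)) / (2 * h)
uy <- function(x, y) (u(x, y + h) - u(x, y - h)) / (2 * h)
vx <- function(x, y) (v(x + h, y) - v(x - h, y)) / (2 * h)
vy <- function(x, y) (v(x, y + h) - v(x, y - h)) / (2 * h)

x <- 0.7; y <- -1.3                 # an arbitrary test point
c(ux(x, y) - vy(x, y),              # should be ~0  (u_x = v_y)
  uy(x, y) + vx(x, y))              # should be ~0  (u_y = -v_x)
```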
http://crypto.stackexchange.com/questions/3805/encryption-scheme-with-equivalent-keys/3809
# Encryption scheme with equivalent keys? I've long been looking for a symmetric encryption scheme (or algorithm) with equivalent keys. Let me define what I want: 1. Symmetric encryption algorithm with encryption function $E_k$ and inverse decryption function $D_k$. 2. Existence of equivalent keys. Keys $k_i \neq k_j$ will be called equivalent if $E_{k_i}(\cdot) \equiv E_{k_j}(\cdot)$ as functions and the same for the decryption operation $D_{k_i}(\cdot) \equiv D_{k_j}(\cdot)$. 3. Without knowledge of some master secret, it would be computationally infeasible to derive an equivalent key $k' \neq k$ from a key $k$. 4. With knowledge of some master secret, it would be straightforward to derive at least "many" equivalent keys (let's just say that the equivalent keys should not be scarce compared to the size of the entire keyspace, e.g. 4 equivalent keys in the entire keyspace are not enough). 5. No trivial intermediate value: of course it's trivial to come up with a scheme satisfying all of the above in which all equivalent keys map to one intermediate value (for example take 128 bit keys with an additional 16 bits at the end, then discard these as a first step before applying standard AES-128). This is perhaps the most important feature and the hardest to define. Let's say then that it requires knowledge of some master secret or some special trapdoor to reduce equivalent keys to an intermediate value. Does anyone know of existing schemes/algorithms satisfying at least most of the requirements above, or any interesting papers on the subject, or just have some bright idea to share with us? - – fgrieu Sep 17 '12 at 11:49 You say that without knowledge of the master secret, deriving equivalent keys should be infeasible. It is a requirement that the fact that there are equivalent keys be kept hidden? In other words, an attacker with access to your source code might be able to look at it and say "hey, if only I knew how to factor this large number (for example), then I could find multiple keys that all resulted in the same encryption." Does the system need to be able to defend against this or can an attacker know that equivalent keys exist but just not be able to find them? – mikeazo♦ Sep 17 '12 at 12:20 The very existence of equivalent keys is not a secret in this envisioned scheme. It's deriving new additional keys or "merging" equivalent keys to produce new ones that should be difficult/impossible to an attacker. The problem to be solved, basically, and without dragging you all into too much detail, is within the realm of Conditional Access or DRM systems, where different users need to decrypt the same content (encrypted once for all of them), but using keys that are traceable back to them. As noted above, broadcast encryption schemes can handle this in some settings. – Harel Sep 17 '12 at 12:23 ## 2 Answers I don't know of any scheme that provides this, and it sounds tricky to build one. Here's the closest I can come up with. Let $n=pq$ be a 2048-bit RSA modulus, $d$ be a random 2048-bit number that is relatively prime to $(p-1)(q-1)$, and $e = d^{-1} \bmod (p-1)(q-1)$. The master secret is $d$; $n$ will be baked into the description of the encryption algorithm $E$. Let $\text{truncate}(z)$ be the result of truncating $z$ to 128 bits: say, deleting the low 1920 bits and keeping the high 128 bits. Then, define $$E_k(x) = AES(\text{truncate}(k^e \bmod n), x),$$ and similarly for $D_k(x)$. In other words, the key $k$ is a 2048-bit string. 
To encrypt under $k$, we raise the key $k$ to the power $e$ modulo $n$ (a textbook-RSA encryption of $k$), keep the first 128 bits, and use that as an AES key to encrypt the message $x$. Given a key $k$ and knowledge of the master secret, it is easy to derive many other keys that will be equivalent (just tweak the low bits of $k^e \bmod n$ randomly, then raise that to the $d$th power modulo $n$, and you've got an equivalent key). This satisfies your requirements 1-4, but not requirement 5. Why do you need your requirement 5? I wonder if we might be able to weaken the requirement somehow and still get something that is sufficient for your application. - Thanks - I'll have to think deeper about your answer, but for now, this is why I want requirement 5: if this were some scheme where equivalent keys were handed out, one to each user/customer, and these were leaked, then they could be traced. But if there's such an intermediate value, this kind of traitor tracing property is lost. In your example, if such a traitor leaked the 128 bits used as the AES key, it wouldn't be traceable to him. – Harel Sep 15 '12 at 20:10 1 If that's what you want, you should look at the literature on traitor tracing, watermarking, and such topics. There's been a great deal of research on solving that problem -- though it doesn't necessarily take the form you have described. For instance, Bluray's AACS scheme is a good example of a state-of-the-art approach to this problem. There is a lot written about AACS on the Internet. – D.W. Sep 15 '12 at 20:52 Thanks, but schemes based on broadcast encryption key distribution are another matter, and have their own practical considerations. I was still wondering if schemes/algorithms based "simply" on equivalent keys, as I defined, are possibles. Maybe the answer is negative. – Harel Sep 16 '12 at 8:01 @D.W. : You meant $e=d^{-1}\bmod \mathtt{LCM}(p-1,q-1)$ where there is $e = d^{-1} \bmod n$. – fgrieu Sep 16 '12 at 12:38 Yes, thank you @fgrieu! I've edited my answer to fix this. Thanks for pointing it out! – D.W. Sep 16 '12 at 23:38 show 1 more comment I think your problem might be solved by the Derived Unique Key Per Transaction (DUKPT) key generation algorithm. It doesn't work exactly the way you describe, but the keys might have the properties you're looking for. DUKPT is used in banking terminals to generate a unique key per message. It starts with a Base Derivation Key, which is the super secret master key for the whole system. When a PIN pad terminal is to have a new key injected, the BDK is transformed using the terminal number and the encryption algorithm (generally 3DES) in a non-reversible fashion. This generates a new key called the Initial PIN Encryption Key (IPEK). The IPEK is injected into the PIN pad (along with the key's ID), which immediately runs the transformation algorithm again, creating a set of (up to) 21 keys called Future Keys (FKs), and the injected IPEK is then discarded by both the terminal and the injection machine. Each FK is used only once to encrypt a single message (containing the customer's account number and PIN), and a transaction counter is increased. That FK is then discarded by the terminal. Once the set of 21 FKs is depleted, the algorithm is run again to generate a new set of FKs. The message sent to the host contains the key ID, the terminal number, the transaction counter, and the 3DES encrypted data. 
At the decrypting end, the key generation process is run however many times are indicated by the transaction counter, and the resultant FK is then used to decrypt the message. The strength comes from the non-reversible translation step. A successful attack on the encryption algorithm is required to recover the previous set of keys. As each older set of FKs is discarded, a terminal progressively drifts farther and farther from the BDK. Because the IPEKs are individually created for each terminal, no terminal's keys bear any relationship to any other terminal's keys. The compromise of one terminal will not yield a secret that can be used to break any other terminal's message. The compromise of one message will not yield information about any prior keys generated by that terminal. Because IPEKs are destroyed when they are turned into FKs during the initialization of the terminal, no terminal leaves the initialization facility without being at least twice removed from the BDK. The drawback is decryption effort. The decrypting host must run through the algorithm for every set of FKs generated in the lifetime of the machine. The HSMs used in the financial industry are purpose built to run the DUKPT protocol, and will handle thousands of transactions per second. - – Ilmari Karonen Sep 17 '12 at 15:09
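To make D.W.'s construction above concrete at toy scale, here is a hedged R sketch. The numbers are tiny textbook-RSA parameters, and the "truncation" keeps only the high byte, standing in for the 128-bit AES key; this is purely illustrative and has none of the security of the 2048-bit scheme described.

```
# Square-and-multiply modular exponentiation (values stay far below 2^53).
powmod <- function(b, e, m) {
  r <- 1; b <- b %% m
  while (e > 0) {
    if (e %% 2 == 1) r <- (r * b) %% m
    b <- (b * b) %% m
    e <- e %/% 2
  }
  r
}

n <- 61 * 53            # toy modulus (p = 61, q = 53)
e <- 17; d <- 2753      # e*d = 1 mod 3120; d plays the role of the master secret
truncate.key <- function(c) c %/% 256   # keep the high byte = the effective "AES key"

k  <- 123                # a user's key
ck <- powmod(k, e, n)    # "RSA-encrypt" the key
hi <- truncate.key(ck)   # the effective key used for encryption

# With the master secret d, derive equivalent keys: keep the high byte of ck,
# vary the low byte, and take the d-th power to map back into key space.
cands <- hi * 256 + 0:255
cands <- cands[cands < n]                           # stay inside the modulus
equiv.keys <- sapply(cands, powmod, e = d, m = n)   # distinct but equivalent keys
all(sapply(equiv.keys, function(k2) truncate.key(powmod(k2, e, n))) == hi)  # TRUE
```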
http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80551
## Using slides in math classroom ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues). It would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this. I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well. - 1 Now you can delete your comments about making it community wiki! – Kevin O'Bryant Nov 4 2011 at 20:49 1 I put a similar "math teaching style" question on math.stackexchange rather than the "research oriented" Mathoverflow, and it was closed as being too subjective and not "mathy" enough (despite getting many upvotes). Maybe I should move the discussion to MO ! – gordon-royle Nov 5 2011 at 23:00 1 @Andrew - I won't... just amused by the erratic application of the "rules" about questions that are not research-level mathematics questions (e.g. "Most memorable title"). The fact that this question and similar soft-questions get many upvotes and participation from great mathematicians such as Terry Tao, might perhaps lead to some introspection about whether the rules are too rigid. But I'm not going to get evangelical about it. – gordon-royle Nov 6 2011 at 12:39 1 There are no rules, only guidelines. I encourage you to join meta where you'll find that we debate them quite often (and whinge about exactly the point that you've just made). – Andrew Stacey Nov 6 2011 at 14:20 1 I'll post this as a comment rather than an answer, but Joseph Gallian has some recommendations and a short list of pros and cons in his Advice on Giving a Good PowerPoint Presentation (d.umn.edu/~jgallian/goodPPtalk.pdf). Might be of interest. – J W Nov 9 2011 at 17:49 show 3 more comments ## 17 Answers I think you already touched on the two main points: pretty pictures are so much better than anything done on a chalkboard is the pro, but you cannot decently unwind any argument on slides. I've used them intensively, I do it a lot less now. (Here's a con you did forget about: they take a lot of time to prepare, even when you're only revising them.) If the room lends itself well to it, the hybrid method is best: use the slides only when they beat the board. Rooms that have a screen in the corner, rather than in front of the board, are best for this. Also, it seems that it's easier to fall asleep to slides than to a lecture, so be aware of that. Make sure that the room is never too dark (the quality of the screen material can be critical here too: good screens should be readable in full light). 
And switching your routine, never showing slides for too long, helps keeping the students awake. - 7 Yes, slides take a lot of time to prepare, and not all students realize or appreciate this. They may think the instructor used something ready-made, similar to commercially available lessons on slides at the grade or high school level (which they may have seen in use). I know someone who took great pains to prepare slides from scratch for a post-calculus course, only to get teaching evaluations like "He teaches from Power Point, give me a break". This is to emphasize the point you are making, not to discourage the OP from using slides. – Margaret Friedland Nov 4 2011 at 16:31 8 I've also seen lecturers who "borrowed" slides from people who have taught the same course in the past. This inevitably leads to the embarrassing "what I think they're trying to say here is..." – Michael Lugo Nov 4 2011 at 18:36 Borrow slides by all means, but edit them yourself first! That takes time, but a lot less time (as I know from reusing my own slides and usually changing them a bit). – Toby Bartels Nov 7 2011 at 8:42 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Slides can, in principle, enhance a lecture, but there is one important difference between slides and blackboard that definitely needs to be kept in mind, and that is that slides are much more transient than a blackboard. Once one moves on from one slide to the next, the old slide is completely gone from view (unless one deliberately cycles back to it); and so if the student has not fully digested or at least copied down what was on that slide, he or she will have to somehow try to catch up in real time using the subsequent slides. Often, the net result is that the student will become more and more lost for the remainder of the lecture, or else is spending all of his or her time transcribing the slides instead of listening in real time. In contrast, given enough blackboard space, the material from a previous blackboard tends to persist for several minutes after the point when one has moved onto another blackboard, which allows for a less frantic deployment of attention and concentration by the student. If one distributes printed versions of the slides beforehand, then this difficulty is mostly eliminated. Though sometimes it takes a few lectures for the students to adapt to this. Once, in the first class in an undergraduate maths course, I said that I wanted my students to try to understand the lecture rather than simply copy it down, and to that end I distributed printed copies of the slides that I would be lecturing from. (The slides were in bullet point form, and I would expand upon them in speech and on the board.) I then found that for the first few lectures, the students, not knowing exactly what to do with their time now that they did not have to take as much notes, started highlighting all the bullet points on the printed notes. It was only after I threatened to distribute pre-highlighted lecture notes that they finally started listening to the lecture (and annotating the notes as necessary). - 1 I'm experiencing some of this right now. My students (at UCLA, by chance) actually told me that they want the slides to print before class, so they could annotate them. Others complained at the beginning that the slides vanish too quickly. 
After changing the format to make the lecture slower and more printable, I find that my classes have been much better. – Ryan Reich Nov 5 2011 at 23:44 1 @Terry: I'm curious, did you end up liking the lecturing approach described in your last paragraph (did students understand more), or did you drop it for some reason? (From my personal experience as a student: when I don't take notes I feel like understanding more during the lecture but forgetting faster afterwards. When I take notes I can't follow that much but have the impression that the knowledge stays for longer. I wonder if that's just my impression or if there are studies proving this effect.) – Michael Nov 7 2011 at 15:18 1 I haven't taught these sorts of classes for a while, but certainly if I had to teach a course for which I had previously already prepared such slides, I would try doing so again. It is true though that one needs to prevent material being forgotten after the lecture - good homework sets are one important way to combat this. – Terry Tao Nov 7 2011 at 23:52 I would never, never use slides for a course. That said: I do sometimes show my student pictures taken from the web. For example, I recently showed this picture to the students in my group theory class in order to illustrate the isomorphism between $A_5$ and the group of symmetries of a dodecahedron. Also, I sometimes prepare animations with Geogebra that I then show during class. Here's an example (click and drag the blue node). Of course, it's even better to create the graph in front of the students: Geogebra is good for that. My philosophy is that students should be shown things being created, not ready made. But I'll admit that this is not always possible... - 3 I made a model in paper of the five tetrahedra (following web.eecs.utk.edu/~plank/plank/origami) Nothing beats actually rotating the thing with your hands :D – Mariano Suárez-Alvarez Nov 5 2011 at 2:53 Anyhow, +1. ${}$ – Mariano Suárez-Alvarez Nov 5 2011 at 2:54 I'm going to try to answer the actual question rather than saying whether I think that chalk or projector is better. That "question" being: It would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have notices, and ways you have found to optimize this. (Though I'm curious about the request for ways found of optimising pitfalls!) I switched to using beamer slides 2.5 years ago. I'm partway through the fifth course that I've given using slides (and the course immediately prior to those was given on chalkboard but having prepared them as slides - a half-and-half experiment). By-and-large, I would say that I give better lectures using the slides than I used to when giving chalk talks. The following is a fairly disorganised list of my thoughts on both why I chose to switch and things that I've learnt in the process. I hope that this will be of use to you. Feel free to contact me for more details, and we've also recently been discussing this a bit on the nForum. 1. A big reason for me switching was that I teach in English in a Norwegian University. Although the students have excellent English, it is not their native language. It takes them longer to copy from the board, and their error rate is higher, so more time in a chalk-talk is wasted waiting for them to catch up than I felt I could allow. Giving the lecture using slides meant that I had much more control over where the students were focussing at any particular time (mainly, I wanted this to be on me). 
(To be clear: the time taken was in addition to the necessary time for students to process ideas that they've just been told about. Of course, pauses are necessary. But pauses by happenstance - because the students are busy copying the board - are not the best pauses.) 2. As a consequence, I always make my slide notes available beforehand. Admittedly, sometimes it was at 11pm before an 8am lecture, but no-one's perfect! They can get the actual presentation, a compressed version (the `trans` option), and a handout version (they are strongly encouraged only to print the latter). That way, they can read in advance what I'm going to show them, and they can bring the handout version along to add any additional notes if they wish. 3. The handouts are not a substitute for going to the lecture. The slides are not a summary of the lecture, they are what I want the students to be able to see while I am talking to them about something. Ideally, when the students look at the notes afterwards then they will be able to remember (more-or-less) what I've said. But if they weren't at the lecture then they won't have anything to remember so the handouts will be of less use (not of no use, it will still say what topics were covered so they can find out about them by other means). 4. Lectures never go completely as planned. But never use the chalk-board and the screen. Whenever I see someone doing this at a conference I want to run out of the lecture hall screaming. Not only will the lighting be completely different for both, but also the students will have the wrong mindset and will take time to make the switch. Use a system whereby you can write on the presentation (and can bring up blank pages if needed). You can even leave deliberate gaps if you want! As well as not requiring a change in gear, you can then make the annotations available afterwards (and have an easy record of the annotations that you made when you revise the slides for next year). I've used xournal (for Linux), jarnal (when forced to use Windows), and am currently using an iPad (despite what's said elsewhere, this is extremely usable for this). (Incidentally, I'd say that going the other way is acceptable: if you are primarily using the board and then want to show a couple of fancy pictures then so long as it doesn't take an age to set-up the projector then it's okay.) 5. Practise. Get a system so that your writing on the screen is acceptable (don't worry about perfect), you know how your program works, and you can change pages easily (preferably without looking at the machine). 6. Yes, it usually takes longer to prepare the slides - first time. But once you're used to the flow of writing a beamer presentation then that aspect doesn't actually add that much more. What probably adds the most time is that you are now forced to completely prepare the lecture in advance, rather than "winging it" and claiming that it is "good for the students to see the professor make mistakes". (You can probably guess my reaction to those!) It can take some effort to get a really nice system, I think I have one, and now it doesn't take me long to prepare a presentation. On that note, preparing your notes in LaTeX makes it much easier to prepare it in "layers". First, lay out your lesson plan (you do have one, right?), then add the frame titles, finally add the content of each frame. Then go back and adjust the lesson plan according to what did and didn't fit as you expected. 
(And it's possible to produce the lesson plan from the same source as the presentation.) And when you come to reuse the slides, it's much faster. 7. Think always "What can the students see right now?". If you want them to be able to refer to more than you can fit on a slide, consider giving them a "cheat sheet" handout as well. Slightly ironically, giving the lectures using LaTeX means that I am much more aware of how the presentation looks, something that is just as important as what is in it. 8. As hinted above, my slide notes would not form a good set of "traditional lecture notes" from which to revise. But then I don't believe that the chief aim of a lecture should be to produce that. Again, consider using other methods for this. For my course, I have a wiki where I can put more lengthy arguments. I use homework questions to "force" the students to read the wiki. That's all that I can think of right now. You can get an idea of what my lectures look like by visiting the home page of my current course: http://mathsnotes.math.ntnu.no/mathsnotes/show/TMA4145+Home+Page. What I've said above is phrased a bit like advice, but it's really just a list of things that spring to mind when I think about how I've adapted. I will give one genuine piece of advice: don't base your lectures on what worked best for you. The reason why should be obvious! But to illustrate the absurdity, let me note that the undergraduate course in which I learn the most and where I really feel that I understood and still understand the topic the best, was the worst lecture course that I ever went to. Why? Simple: because I couldn't learn from the lecturer, I was forced to go and learn it by myself. So now I mumble, write illegibly, stop halfway through a proof, and get wildly sidetracked by irrelevant questions - because that's what worked best for me! - 2 What iPad software do you use? Do you use a stylus - I can't write legibly with my fat fingertip.. – gordon-royle Nov 5 2011 at 22:56 I appreciate all these tips (advice or not). I'm just trying for the first time to give a beamer course and this is a very timely question for me. – Ryan Reich Nov 5 2011 at 23:39 Anyone curious as to my iPad set-up should take a look at this: tex.blogoverflow.com/2011/10/i-tex-therefore-ipad – Andrew Stacey Nov 6 2011 at 8:15 @Andrew: I am very amused by your intransigence. I can understand drawing personal battle lines and saying: "I won't do x", but your answer here sounds a lot more like: "no-one should do x". All rules have their exceptions, and I'm quite willing to let lecturers operate on their own terms. I must have seen every single approach to lectures done brilliantly, and every one of them done poorly, depending on who was giving the talk... – Thierry Zell Nov 7 2011 at 16:30 Thierry: Did you read the first sentence of my last paragraph? Part of my intransigence is the fact that I always feel in these discussions that there is a strong "Don't use slides" answer (even though the questioner asked directly for experiences of using slides) so I was partly trying to counter it. I wish I had seen every single approach to lectures done brilliantly (and poorly). I'm afraid that my experience has been very limited so I find myself having to invent my own techniques, and when I do I like to share what I've learnt. – Andrew Stacey Nov 7 2011 at 19:00 show 1 more comment I use a hybrid version for some of my classes which take place in a room that allows this: I use computer slides (and animations, computations, etc.) and the board. 
I learned this from my colleague Serkan Hosten, and it works really well in some classes. E.g., I use slides for definitions and theorems (including the relevant ones from the previous lecture) but then work out examples and proofs on the board. This has the obvious advantage of spending time on exactly the items that need time and just the right pauses to get digested, but it also has nice side effects: e.g., the statement of the theorem will stay on the screen even if I'll have to clean the board. - 3 +1: "the statement of the theorem will stay on the screen even if I'll have to clean the board". That's excellent practice. (Only very experienced teachers will know not to erase the board entirely, and keep the important definitions in some corner -- this requires a lot of skill). – André Henriques Nov 4 2011 at 20:21 4 Only very experienced teachers will know not to erase the board entirely, and keep the important definitions in some corner -- this requires a lot of skill ... and a lot of room... – Thierry Zell Nov 4 2011 at 21:54 1 And room is harder to get than skill! – Mariano Suárez-Alvarez Nov 5 2011 at 2:52 1 @Thierry: I agree with you that it's easier to do this with a big blackboard that with a small one. But it's not impossible. The hardest thing is to arrange that, when you first write the definition, you do it in a corner (and without taking too much space). Only then will it be possible to erase everything but that definition, and go on with the lecture. – André Henriques Nov 5 2011 at 10:55 I just decided this quarter to use slides for my calculus class, a large-lecture course of the sort I'd never done before; I figured it would be easier to see the "board" if it were on the big screen. Here is the progression of my mistakes and corrections: • My first lectures had too many words. Slides are great for presenting the wordy parts of math, because they take so long to write and then the students have to write them again. What is not great about them is how much they encourage this behavior. • Since I was giving a "slide presentation" or a "lecture" rather than a "class", my mindset was different: sort of presentation-to-the-investors rather than gathering-the-children. My slides went by too quickly. • I eventually slowed myself down by basing the lectures around computations rather than information. Beamer is pretty good (though not ideal) for this, because you can uncover each successive part of an equation. If you break down your slides like this, it is almost as natural as writing on the board. • My students themselves actually brought up the point that Terry Tao mentioned in his answer: the slides were too transient. They also wanted printouts. Having to prepare the slides for being printed in "handout" mode changed how I organized them: for one, no computation should be longer than one frame (something I should have realized earlier). Also, there should be minimally complex animations, since you don't see them in the printout. • Many of them expressed the following conservative principle: they had "always" had math taught on the board and preferred the old way. So I've started mixing the board with the slides: I write the statement of the problem on a slide, solve it on the board, and maybe summarize the solution on the slides. This works very well. • Now I can reserve the slides for two things: blocks of text (problem statements, statements of the main topic of the lesson) and pictures. 
TikZ, of course, does better pictures than I do, especially when I lose my colored chalk. Preparing these lectures used to take me forever. Using beamer does require that you learn how it wants you to use it: don't recompile compulsively, because each run takes a full minute, and don't do really tricky animations. Every picture takes an extra hour to prepare. If you stick to writing a fairly natural summary of a lesson, broken by lots of \pause's and the occasional `\begin{overprint}`...`\end{overprint}` for long bulleted lists, an hour lecture will take about two hours to prepare. - Very nicely detailed answer! I like the slide summary of the solution, since it goes back to that cardinal principle of teaching: tell them what you're going to tell them, then tell them, then tell them what you just told them. On another note: Beamer is an incredibly flexible tool when it comes to handouts, and you can easily include slides that are handout-only if you need to convey extra information (e.g. a few frames from an animation.) Also, compiling in draft mode can really speed up those slides preparations! – Thierry Zell Nov 6 2011 at 1:51 I think I like your points in the order that you gave them. The first two are extremely important and ones that I should have thought to list as the major change from my notes from last year is that I'm going through taking out all unnecessary verbiage. From your description at the end, I'd say that you were now giving a board talk supplemented by slides. I'm surprised to hear that UCLA students are so conservative! I'd've thought they'd be more open to someone trying new ways of teaching them. – Andrew Stacey Nov 6 2011 at 8:25 @Andrew: Of course, any communications with students are filtered through the "boldness" mask, and you always hear from the discontents. I don't know what the silent majority thinks, and I can't really find out, though the other day in office hours, I had one guy who said he was surprised to see us "still" using blackboards in college. I asked what he expected, perhaps slide presentations? – Ryan Reich Nov 6 2011 at 15:40 1 You can find out what the "silent majority" thinks. As Feynmen was once told: you just ask them. It's amazing how many ideas and techniques there are for getting information out of students. The problem is that no-one ever tells us about them! – Andrew Stacey Nov 6 2011 at 18:43 1 Just ask! Half way through the lecture then stop, hand out a questionnaire, and say "I'd really appreciate it if you could all take a couple of minutes to fill this in. Although you all fill in the student evaluations at the end of the semester, that doesn't help a lot with this semester." Give them a candy bar while they do it. Go out of the room while they do it. Have them get into small groups to discuss what they do and don't like. Get someone else in to ask them the questions. Ultimately, if you show that you care and that you will take on board what they say, they'll say it. – Andrew Stacey Nov 7 2011 at 8:52 show 2 more comments Full disclosure: I stole the following idea from my wife. For some courses, like calculus, I will create slides with beamer, leaving blank spots to fill in during class. I then print the slides out on paper and present them with the document camera during lecture. When I get to an example, I will work it out by writing on the paper during class, and have it projected in real time. 
This approach combines the advantages of blackboard talks where you work things out in realtime, with the advantages of beamer presentations where you can present nice graphics and also have an outline to limit getting distracted and wandering off on tangents. - 3 I get all my best teaching ideas from my wife too. – Andrew Stacey Nov 5 2011 at 22:19 I took a course at PCMI some years ago from David Perkinson (Reed College). He did an amazing job and single-handedly convinced me it was possible to teach well from slides. Check out this link to see examples of his slides. As the other answers have mentioned, it seems necessary to use slides only in conjunction with the board. Perkinson did this, but also included a useful trick: he created handouts from the slides for students to write on, but left blank spots in those handouts so they had to write the proofs themselves based on what he said, showed on the slides, and wrote on the board. Professor Perkinson is also a wizard of sorts with mathematica, and he was able to create awesome graphics using it. I don't think his mathematica code is online, but I'll bet he'd be willing to share if someone emailed him. He may also have tricks to reduce prep time, as this was the sort of thing he liked thinking about. - I really enjoy Mathematica's graphing capabilities. Even without being a wizard, Mathematica strikes me as especially good (better than Maple, I must admit) when it's time to combine different graphics into one picture (say curve plus scatterplot plus text). Especially useful when designing graphics for the classroom or a test. – Thierry Zell Nov 4 2011 at 16:47 I just finished teaching a course on linear algebra to non-math students. I used a combination of latex-beamer slides and blackboard. One advantage of the slides was being able to do examples of Gauss elimination and inversion of matrices quicker than on the blackboard and without making mistakes. On the other hand, I feel that slides can easily make a lecture less interactive. And, I must agree with Thierry Zell: it took quite some time to prepare these slides, even though I could adapt the latex sources from the previous people teaching this course. - My solution is to use a tablet PC (the pen-enabled kind, not the modern entertainment tablets like the Ipad),hooked up to a data projector. I have "lecture templates" which contain the copying intensive stuff (statements of theorems, definitions, graphs, complex diagrams) on the page, along with plenty of blank space for annotation. Those are on a website prior to the lecture. The students print them off at home, and bring them to class. I then annotate the lecture notes (using a pdf annotator and the tablet pen) and the students take notes as they wish. This, I feel, combines the benefits of having some complex material prepared ahead of time with the benefits of having arguments, calculations etc. developed in real time, rather than canned in advance. So it avoids the canned slides-whizzing-by problem. The only disadvantages I can see are the limitations of screen size. Sometimes nothing replaces the virtue of a big whiteboard, and having every part of a long development in front of your eyes all at once. In that case, I use a whiteboard. - 1 An iPad works just fine, by the way. – Andrew Stacey Nov 5 2011 at 22:18 The quality of "inking" I've seen with the after-market pens for the Ipad is pretty miserable. Not surprising given that they weren't designed for digital inking (and given Job's antipathy to the digital pen.) 
But perhaps there's something better I haven't seen. – GMark Nov 6 2011 at 1:51 I don't know what "inking" refers to. Certainly I can get better control with a graphics tablet (no need for an actual tablet computer), but it's all a balance. I get good enough quality with an iPad that it's other benefits mean that on the whole it is better than my laptop. (See my comment on my post if you're interested in more details on my iPad experience.) – Andrew Stacey Nov 6 2011 at 8:20 I don't know what "inking" is either, but for the record, the quality (response, especially) of input devices for the ipad is, in my experience, extremely variable. My first cheap stylus didn't cut it for very long. – Thierry Zell Nov 9 2011 at 19:12 If you intend to post your slides online after class then you run the risk of students not even taking notes/digesting the material on their own (I've had this feeling myself) or feeling that they don't have to attend class. This is obviously a con but the other side is that the students then have a good outline of what you talked about in class with your emphasis included. I second Thierry's comments. - 2 Frankly you suggesting to keep some secrets from the students to make them attend the class I think it is a wrong idea. – Anton Petrunin Nov 5 2011 at 1:20 1 @Anton: I don't think what is suggested here is keeping secrets. But providing slides handouts is something that I've only ever done reluctantly, because the danger is to provide a fake sense of security to students, that the handouts contain everything they need to know. They seldom do. – Thierry Zell Nov 5 2011 at 2:36 2 @Anton I am not suggesting keeping secrets from the students but rather not giving them a finished product along with the fake sense of security that Thierry has mentioned. My issue with handing over the finished notes/slides is that it does not provoke the students to spend time thinking instead of memorizing. – BSteinhurst Nov 5 2011 at 17:44 I have a story in the middle. I hurt my right shoulder over time, by 1994 it was simply too much to write on a blackboard, at least overhead. So, pre-Beamer, I wrote up these slides on transparencies with colored pens. These were unusually well-prepared lessons for me, I had everything worked out, it was all clearly my work, and I still had plenty of blank slides on which to write new material when needed. That is the hardest I have ever worked on course preparation. They did have class questionnaires, sent to administration and never seen by me, later the chairman told me how very much the students hated the slides. They were never fond of me but I think that was a separate item, the slides made it worse than it would have been...I suppose my question now is, would things have been different if I also gave each student photocopies of the slides for that day? - 3 The secret is giving them icecream... – Mariano Suárez-Alvarez Nov 5 2011 at 2:49 3 Given where I was working, chicken-fried steak. – Will Jagy Nov 5 2011 at 3:01 Chicken fried ice cream? – Spice the Bird Nov 6 2011 at 1:20 ice fried chicken cream? a bad joke perhaps. – Shripad Nov 6 2011 at 11:23 I've already given my opinion, and this is more of a remark: how the pros and cons are weighed between blackboard and slides should be influenced by a whole collection of classroom factors, and the first one among them should probably be class size. 
This is a rather obvious remark, but I thought it was worth pointing out; Jaap Eldering's answer brought it to the forefront for me, because he mentioned doing examples on slides to avoid making mistakes, and my first reaction was: "making mistakes in class is good!". And then it occurred to me that I can use mistakes in the classroom fairly effectively because I only teach small classes. In a big classroom, I would simply not be able to receive instant feedback efficiently enough to do this as well, and I would not be comfortable trying. In a very large lecture hall, the blackboard will often lose a lot of its advantages given how large you have to write.

- Indeed there's didactical value in making mistakes. But e.g. in my case of a simple calculation error when inverting a matrix, I think the time consumed to find the error outweighs the advantages. – Jaap Eldering Nov 5 2011 at 10:13

@Jaap: of course, especially since finding a mistake in a matrix computation can take such a long time. – Thierry Zell Nov 5 2011 at 14:14

1 I disagree that there is a didactical value in making mistakes in a lecture. There is a didactical value in showing how one might go about solving a problem, including all the mistakes that you might have made or that you did make when you first tried to solve the problem by yourself. – Andrew Stacey Nov 6 2011 at 18:45

1 @Andrew: you're very free to disagree, of course. But I learned a tremendous amount about how to fix algebraic mistakes from a fantastic Physics teacher who could never write two lines without dropping a sign or losing an exponent, yet could go back and (1) explain how he could tell there was a mistake in the result and (2) track down every single one, and thus accurately derive any physical law you care to name. He did not do it deliberately, he was really that careless. I don't do this in my own classes, but I've always been in his debt and I am sure to seize opportunities when they arise. – Thierry Zell Nov 6 2011 at 19:38

1 @Andrew, continued: I don't tend to make mistakes on the board, but I try to forget as much as possible how to solve specific problems, so that I may very well start out on completely the wrong track. I think this is more valuable than slickly solving every problem being thrown at you, since it resembles the students' own experiences when facing a new problem. – Thierry Zell Nov 6 2011 at 19:42

Finally, a question on MO I feel qualified to answer! I am a PhD student in Ireland doing a certain amount of lecturing. As a first remark, I am lucky in the sense that undergraduate maths was never especially easy for me and therefore I empathise with the average student. My second remark is that I hope for a career lecturing in the Irish Institute of Technology sector, where the role is primarily teaching, as opposed to the university sector where research is the primary role. Hence I am acutely interested in the skills of a mathematics teacher. The second half of the answers here are closer to my philosophy than the first. A particular distinction must be put on the classroom environment and facilities. Regardless, my first instinct is that slides alone are sub-optimal. The alternative to this is to produce everything on the blackboard. I did this last year in a differential calculus module (the students were maths-studies students --- by and large headed towards a career as "high school" mathematics teachers).
The emphasis in this course is to convey to the students that although differential calculus is a relatively intuitive subject with the motivation coming from geometric concerns, as mathematicians we must also be rigorous, logical and precise in our thinking. Hence, we are not merely making a series of calculations and passing exams --- we must understand the content. When I wrote blackboard after blackboard of notes, the students did not have any chance of understanding the material. While I am a fervent believer that exercises and reflection are the best way for a student to achieve this aim, I am reminded of my undergraduate experience, where certain obstacles lay in the path of me putting in this work and luckily my presence at lecture-time was sufficient for me to grasp the general theory and progress (eventually with first class grades) despite less than exemplary exam results in previous years. Put simply, ordinary students do not have the faculties to take down written notes and consider the important comments of the lecturer in real time. However, slides alone do not work either, because mathematics is not a spectator sport (not a cliché when the average student is mainly interested in passing exams --- it is the goal of the educator to transcend this). It takes a superlative lecturer and a cohort of motivated and enthusiastic students to assimilate a lecture purely by ear. At least once I had a lecturer of this standard, but I would vouch that were engineering, scientific or humanities students subjected to his fantastic delivery and questioning, they would simply fall asleep. It is a curse but a fact (among my students at least --- none of whom are math majors) that the average student does not have the aptitude to bask in such splendour.

My compromise, therefore, is very similar to what has been suggested above. I produce a set of notes (available soft-bound in a local printing house), with gaps which we fill in during the class (I print the notes onto an acetate sheet which I project onto a screen and can write on with a marker). All the theorems are writ large, and everything else is teased out as on a blackboard, with suitable prior filling-in both to give the students a sneak preview and for the practical reason of properly spacing out my scribblings. Should the need arise, I can put more complicated graphics in this set of notes. Today we introduced implicit differentiation and I projected this Wikipedia list of curves onto the screen, and this was but a two minute interlude. The issue of students looking ahead was addressed by a motivation at the start of term (we are studying continuous and differentiable (smooth) functions; we draw a picture; we translate these geometric pictures into algebraic ones and never lose sight of this fact). I have covered more content this year than last using this method, the first continuous assessment results showed a marked improvement, and I am ahead of schedule despite allocating a lot more time to comments and explanation of subtleties.

- It also depends on how you think it is best for your students to learn: by listening (hopefully carefully) to the course and then reading notes you'll provide them, OR by letting them write the content themselves.
I don't like the first option too much, certainly because I've not been used to it, and I believe it is a huge advantage to write everything yourself in the moment, because of obvious memorization advantages (it was important for me to have my own notations, a kind of taming procedure) and because, once you read your notes again, you usually remember the parts where the teacher got enthusiastic. Considering then the second option, it is clear to me that the blackboard wins:

• you give the students time to write, since you are writing yourself
• the statements stay up longer (at least if you have enough blackboards, or you just keep the main theorem on one of them!)
• there is more interaction between content, author and students
• your eyes are not constantly dried by this terrible white light
• it allows improvisation
• it is more classy (personal point of view, I agree)

Against:

• it is suicidal (that is, terribly soporific for the students) to NOT prepare your presentation a lot, spending at least as much time as you would on slides
• it requires good handwriting from the teacher
• it's not convenient for drawing complex pictures

My conclusion is then the same as André Henriques'!

- Like Terry Tao, I find the transience of slides to be a problem. This is one reason why I stopped using slides as such and began using a single continuous-scroll page for each topic. I lecture from the bottom of the page, so students who are behind can still see the top. (I'm also one of those people who mixes the projector and the board, with bullet points and formulas on the projector and worked-out examples on the board, so I don't scroll down the page very quickly. Fortunately I work in a facility where the lighting allows this.)

- Most of the non-mathematics courses I've taken in college were done with lecture slides, and I have to say that there are a number of advantages and disadvantages to them that actually amount to more disadvantages if you were to do the same in math. The one obvious advantage is that the slides can be posted online, but the problem with this is that it encourages students to skip class. Even those who don't skip class won't take notes (and are sometimes even encouraged not to take notes by the professors), and this would not be good in a math class, because many people feel that copying down proofs from lecture is the best way to get a better understanding of them. Also, when you have lecture notes, you can sometimes get nonsense like this. Anyways, back to your point. If your main concern is displaying graphics, you could possibly just use slides for graphics. If you can lecture in a room with a projector screen that doesn't obscure the blackboards, that would be ideal for this. Alternatively you can distribute handouts at the beginning with graphics that you will be referencing.

- Indeed: taken out of context, these slides can be rather puzzling! I don't know if "nonsense" is quite the right term, though... Disjointed, maybe... – Thierry Zell Nov 9 2011 at 17:28

2 In lower level math courses at a US university, most students do not want to be in the course to begin with. Providing slides, or even lecture notes, on-line would make it very tempting for them not to come to class. This is a consequence that must be weighed in any decision to use slides. If a student is physically present, you still have an opportunity to reach them. – Chris Leary Nov 9 2011 at 18:05
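Several answers above mention particular beamer devices: `\pause`, the `overprint` environment, and deliberate gaps to be filled in by hand or on the board. A minimal sketch of how these can sit together in one frame might look like this (the example content is invented for illustration):

```latex
\documentclass{beamer}

\begin{document}

\begin{frame}{Worked example: integration by parts}
  Compute $\int x e^{x}\,dx$.

  \pause % nothing below is shown until the lecturer advances

  Take $u = x$ and $dv = e^{x}\,dx$, so $du = dx$ and $v = e^{x}$.

  \pause

  \[
    \int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = x e^{x} - e^{x} + C .
  \]

  % overprint shows different content in the same space on successive slides,
  % which keeps a longer discussion from overflowing the frame
  \begin{overprint}
    \onslide<4>
    \emph{Common slip:} forgetting the constant of integration.
    \onslide<5>
    \emph{Exercise (to be worked on the board):} repeat for $\int x^{2} e^{x}\,dx$.
  \end{overprint}
\end{frame}

\end{document}
```

On the first three slides the overprint area is simply empty, which is one way of building in the kind of deliberate gap mentioned above.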
http://mathhelpforum.com/advanced-algebra/59832-mappings-question-urgent.html
# Thread: Mappings question - Urgent

1. ## Mappings question - Urgent

Hi,

The question is: Let S, T and U be sets and let a and b be mappings from S to T and from T to U respectively. Assume that the mapping a is not onto.

a) Give an example of sets S, T and U and mappings a and b, such that b o a is onto.

b) Can the mapping b o a be invertible, and, if yes, which additional properties do a and b possess?

c) If the answer to (b) is yes, give examples of sets S, T and U and of mappings a and b for which b o a is invertible and for which it is not. (The answer to (a) may be one of the examples. The sets S, T and U need not be the same as in (a).)

Can anybody help with this one please? Thanks in advance

2. Hi

a) A basic example: $S=\{1\}, T=\{1,2\}, U=\{1\};\ a:1\mapsto 1$ (there is no choice for $b$, what is it?)

b) In the given example, $b\circ a$ is invertible. Why? Furthermore, $b\circ a$ is invertible iff it is onto and one-to-one. We've just seen that $a$ is not forced to be onto. What about $b$? And does $a$ have to be injective?
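(To spell the hint out a little: in the example above, $U=\{1\}$ forces $b(1)=b(2)=1$, so $(b\circ a)(1)=b(a(1))=1$ and $b\circ a$ is the identity map of the one-element set, hence onto and invertible even though $a$ is not onto. In general, if $b\circ a$ is onto then $b$ is onto, and if $b\circ a$ is one-to-one then $a$ is one-to-one; so invertibility of $b\circ a$ forces $a$ to be injective and $b$ to be surjective, while $a$ need not be onto.)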
http://nrich.maths.org/2383/note
### F'arc'tion

At the corner of the cube circular arcs are drawn and the area enclosed shaded. What fraction of the surface area of the cube is shaded? Try working out the answer without recourse to pencil and paper.

### Plutarch's Boxes

According to Plutarch, the Greeks found all the rectangles with integer sides, whose areas are equal to their perimeters. Can you find them? What rectangular boxes, with integer sides, have their surface areas equal to their volumes?

### Take Ten

Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube made from 27 unit cubes so that the surface area of the remaining solid is the same as the surface area of the original 3 by 3 by 3 cube?

# Cuboids

### Why do this problem?

This problem requires a lot of calculations of surface areas, within a rich problem solving context.

### Possible approach

Work with a specific cuboid, eg $2 \times 3 \times 5$, or a breakfast cereal box, to establish how to calculate surface area of cuboids. Students could practise working out surface area mentally on some small cuboids made of multilink cubes.

Present the problem, ask students to keep a record of things that they tried that didn't work (and what was wrong) as well as things that did work. In this initial working session, try to ensure that students are calculating surface area correctly. This spreadsheet may be useful (for teachers' eyes only!).

It may be appropriate to draw a ladder on the board, with this on the steps (starting from the bottom):

- calculations going wrong
- no solutions yet
- one solution
- some solutions
- all solutions
- why I am sure I have all the solutions
- I'll change the question to...

A short group discussion could suggest strategies to help students move on up the ladder, before they continue with the problem. This might be a good lesson in which to allocate five minutes at the end to ask students to reflect on what they have achieved, which methods and ideas were most useful, and what aspects of the problem remain unanswered.

### Key questions

• Have you found none/one/some or all of the solutions?
• Is there a cube that will work?
• How might you organise a systematic search for the cuboids with surface area $100$?

### Possible extension

The main extension activity could focus on the convincing argument that all solutions have been found. Once this has been answered, you might like to consider these extensions:

• Express the method for calculating surface area algebraically.
• What surface area values will generate lots of cuboids and which give none or just one?
• Could you set up a spreadsheet to help with the calculations?

### Possible support

In groups, or as a class, keep a record of all cuboids whose surface areas have been calculated. Award ten points for a bull's eye "$100$", five points for each $95-105$, and two points for $90-110$. Any miscalculated results could lose points, providing motivation for peer checking, and helping each other. A sheet showing a net of a cuboid, like this, may help students to organise their working and ideas.

The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
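For the teacher's own reference, one systematic search can be recorded here as an added sketch (assuming whole-number edge lengths): a cuboid with edges $a \le b \le c$ has surface area $2(ab+bc+ca)$, so a surface area of $100$ means $ab+bc+ca=50$. Since $3a^2 \le ab+bc+ca$, we need $a \le 4$, and each case factorises:

• $a=1$: $(b+1)(c+1)=51$, giving $(b,c)=(2,16)$
• $a=2$: $(b+2)(c+2)=54$, giving $(b,c)=(4,7)$
• $a=3$: $(b+3)(c+3)=59$, which is prime, so no solution
• $a=4$: $(b+4)(c+4)=66$, which has no factor pair with both factors at least $8$, so no solution

So, on this assumption, the only cuboids with surface area exactly $100$ are $1 \times 2 \times 16$ and $2 \times 4 \times 7$, which is worth knowing before awarding the bull's eye points suggested above.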
http://mathoverflow.net/questions/18559/is-there-a-machinery-describing-all-the-irreducible-representations
## Is there a machinery describing all the irreducible representations?

Suppose we have a finite dimensional Lie algebra $g$. Is there a machinery to describe all the irreducible representations of $g$?

Consider a toy example, $sl_{2}$ or $sl_{3}$: how do we describe all the irreducible representations of these? Further, consider the quantum case: is there a mechanical way (like an algorithm) of describing all the irreducible representations of $U_{q}(sl_{2})$?

EDIT: What I am looking for is a "mechanical" and canonical machinery describing all the irreducible representations (of course, not only finite dimensional representations, not only unitary representations).

EDIT 2: What I am looking for is some reference describing them explicitly (such as for $sl_{3}$).

- 3 You need to be more specific. The two Lie algebras you mention are simple. Are you asking about semisimple Lie algebras? Also what class of representations are you asking about? Do you want finite dimensional, or unitary representations on a Hilbert space, or ...? The finite dimensional irreducible representations of semisimple Lie algebras are well understood and are parametrised by dominant integral weights. – Bruce Westbury Mar 18 2010 at 8:39

2 Check out A. Rosenberg's "Noncommutative algebraic geometry and representation theory of quantized algebra". He used the spectrum of $U(sl_{2})$-mod and $A_{n}$-mod to describe all the irreducible representations of $sl_{2}$ and the $n$-th Weyl algebra. In fact, irreducible representations of a semisimple Lie algebra can be reduced to representations of Weyl algebras, then glued back. Actually, using his machinery, it is very convenient to find all irreducible representations of $sl_{3}$. – Shizhuo Zhang Mar 18 2010 at 14:56

It is still not clear to me what you mean by "all" representations. Do you mean irreducible representations on a naked C-vector space with no additional structure? Is this kind of result known for any algebraic structure? – Qiaochu Yuan Mar 18 2010 at 19:06

@Shizhuo Zhang. Has this paper been published? Do you possibly have a link to it? – Grétar Amazeen Mar 18 2010 at 23:25

@Amazeen: Yes, it is a book; you can find it in your library or on Amazon. Actually, you can also take a look here: mpim-bonn.mpg.de/preprints/send?bid=3217. It is a categorical version of what I mentioned above. – Shizhuo Zhang Mar 19 2010 at 4:34

## 2 Answers

The problem of classifying irreducible $sl_2(\mathbb C)$-representations is essentially intractable as it contains a wild subproblem. Indeed, the action of the Casimir element $C$ on any irreducible representation is by a complex scalar (by a theorem of Quillen I believe). If we consider the case when $C$ acts by zero, by a result of Beilinson-Bernstein the category of $sl_2$-representations with $C=0$ is equivalent to the category of quasi-coherent $\mathcal D_{\mathbf P^1}$-modules. In this $1$-dimensional case every irreducible $\mathcal D_{\mathbf P^1}$-module is holonomic. If we restrict ourselves to irreducible regular holonomic modules we have two possibilities. One case is that they are supported at a single point and then the point is a complete invariant. In the other case they are classified by a finite collection of points of $\mathbf P^1$ and equivalence classes of irreducible representations of the fundamental group of the complement of the points which map the monodromy elements of the points non-trivially.
In particular we can consider the case of three points, in which case the fundamental group is free on two generators (they and the inverse of their product being the three monodromy elements). The irreducible representations where one of the monodromy elements acts trivially correspond to removing the corresponding point and thinking of the representation as a representation of the fundamental group of that complement. Hence, we can embed the category of finite-dimensional representations of the free group on two elements as a full subcategory closed under kernels and cokernels of the category of $sl_2(\mathbb C)$-modules. This makes the latter category wild in the technical sense. However, the irreducible representations of the free group on two letters are also more or less unclassifiable.

There is no contradiction between this and the result of Block. His result gives essentially a classification of irreducibles in terms of equivalence classes of irreducible polynomials in a twisted polynomial ring over $\mathbb C$. So the consequence is that such polynomials are essentially unclassifiable.

[Added] Intractable depends on your point of view. As an algebraic geometer I agree with Mumford making (lighthearted) fun of representation theorists who think that wild problems are intractable. After all we have a perfectly sensible moduli space (in the case of irreducible representations) or moduli stack (in the general case). One should not try to "understand" the points of an algebraic variety but instead try to understand the variety geometrically. Today, I think that this viewpoint has been absorbed to a large degree by representation theory.

- 2 Is there a reference for a precise statement of Mumford on this? – A Stasinski Apr 10 2010 at 16:11

@Stasinski: Sorry, I remember reading it (so it is not by word of mouth) but I do not remember where. – Torsten Ekedahl Apr 13 2010 at 4:14

1 Perhaps this footnote at the bottom of page 213 of "Geometric Invariant Theory"? books.google.com/… – jc Mar 16 2011 at 20:26

The short answer is no. There is a classification of primitive ideals in the enveloping algebra (and quantised enveloping algebra). This reduces the problem to primitive rings. However the representation theory of primitive rings which are not Artinian is complicated. An example which I find easier is the Weyl algebra (or linear differential operators). This ring is primitive since the vector space of polynomials is an irreducible faithful representation. This ring is in fact simple (it has no nonzero proper two-sided ideals). However the representation theory encompasses the theory of linear differential equations with polynomial coefficients. So, speaking heuristically, the representation theory of semisimple Lie algebras is at least as complicated as the representation theory of the Weyl algebra, and it is unreasonable to expect an answer in this case. I don't know of a formal result that says this is an unreasonable request. For example: does this problem include the problem of classifying indecomposable representations of a wild algebra?

Edit: I have just found this reference which solves the question for $sl(2)$.

MR0605353 (83c:17010) Block, Richard E. The irreducible representations of the Lie algebra $sl(2)$ and of the Weyl algebra. Adv. in Math. 39 (1981), no. 1, 69--110.
- Bruce: while I didn't downvote, this answer is a bit terse and it might not be clear to the original poster how examples like the Weyl algebra fit into the original question about sl_2 and sl_3. – Yemon Choi Mar 18 2010 at 9:51

Yemon, Thanks. – Bruce Westbury Mar 18 2010 at 10:22

2 One way to connect: The Weyl algebra is a quotient of the enveloping algebra of the three-dimensional Lie algebra (and, more generally, the primitive quotients of the enveloping algebras of nilpotent Lie algebras are Weyl algebras of various dimensions), so the complexities of the representation theory of the Weyl algebras are smaller than that of Lie algebras. – Mariano Suárez-Alvarez Mar 18 2010 at 11:57

V. V. Bavula has also given all simple modules over the Weyl algebras and $\mathfrak{sl}_2$, in the context of his generalized Weyl algebras. – Mariano Suárez-Alvarez Mar 18 2010 at 13:34

@Mariano It should be mentioned that the technique described by Shizhuo in the comment on the original question uses the hyperbolic structure, which is essentially a GWA structure. I wonder if their results are parallel. (Yes I know this was over a year ago, but this site is like the cave of wonders, always finding new treasures hiding around!) – B. Bischof Nov 24 2011 at 2:50
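As background (standard facts, recorded here only because both answers lean on them): the first Weyl algebra is $A_1=\mathbb C\langle x,\partial\rangle/(\partial x-x\partial-1)$, acting on the polynomials $\mathbb C[x]$ by multiplication by $x$ and by differentiation. This module is faithful, and it is irreducible: any nonzero submodule contains a nonzero polynomial, hence (applying $\partial$ repeatedly) a nonzero constant, hence (multiplying by powers of $x$) all of $\mathbb C[x]$. This is the "irreducible faithful representation" referred to in the second answer. For the first answer, the relevant central element of $U(sl_2)$ is the Casimir $C=ef+fe+\tfrac{1}{2}h^2$, which acts on any irreducible representation by a scalar $\lambda$; fixing $\lambda$ means working over the quotient $U(sl_2)/(C-\lambda)$, and the case $\lambda=0$ is the one the first answer relates to $\mathcal D_{\mathbf P^1}$-modules via Beilinson-Bernstein.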
http://mathhelpforum.com/math-challenge-problems/182995-improper-integral-4-a-print.html
improper integral #4

• June 13th 2011, 08:57 PM
Random Variable
improper integral #4

Evaluate $\int_{0}^{\infty} \frac{\ln x}{(x+a)(x+b)} \ dx \ \ a,b>0 \ (a \ne b)$

Unlike the last integral, try not to use contour integration.

• June 14th 2011, 02:38 PM
Drexel28
Re: improper integral #4

Quote: Originally Posted by Random Variable
Evaluate $\int_{0}^{\infty} \frac{\ln x}{(x+a)(x+b)} \ dx \ \ a,b>0 \ (a \ne b)$ Unlike the last integral, try not to use contour integration.

Spoiler:

We first make the observation that $\displaystyle \frac{1}{(x+a)(x+b)}=\frac{1}{b-a}\int_a^b\frac{dy}{(x+y)^2}$

And thus, if $I(a,b)$ is the given integral we have that $\displaystyle I(a,b)=\frac{1}{b-a}\int_0^\infty \int_a^b \frac{\log(x)}{(x+y)^2}\text{ }dy\,dx$

Now, since $\displaystyle \int_{(0,\infty)\times[a,b]}\left|\frac{\log(x)}{(x+y)^2}\right|\text{ }d(x,y)<\infty$ (which is easily verified via estimation) we may apply Fubini's theorem to conclude that $\displaystyle I(a,b)=\frac{1}{b-a}\int_a^b\int_0^\infty \frac{\log(x)}{(x+y)^2}\text{ }dx\,dy$

But, it is easy to see that $\displaystyle \int\frac{\log(x)}{(x+y)^2}\text{ }dx=\frac{x\log(x)}{y(y+x)}-\frac{\log(y+x)}{y}$

So that $\displaystyle \int_0^{\infty}\frac{\log(x)}{(x+y)^2}\text{ }dx=\frac{\log(y)}{y}$

and so $\begin{aligned}I(a,b) &=\frac{1}{b-a}\int_a^b\int_0^\infty\frac{\log(x)}{(x+y)^2}\text{ }dx\,dy\\ &=\frac{1}{b-a}\int_a^b\frac{\log(y)}{y}\;dy\\ &=\frac{\log^2(b)-\log^2(a)}{2(b-a)}\end{aligned}$

• June 14th 2011, 04:30 PM
Random Variable
Re: improper integral #4

There is an easier approach. Make the substitution $u = \frac{ab}{x}$ and see what happens.

• June 14th 2011, 04:58 PM
Drexel28
Re: improper integral #4

Quote: Originally Posted by Random Variable
There is an easier approach. Make the substitution $u = \frac{ab}{x}$ and see what happens.

Yes, that may work. But it wasn't what was most natural to me. I evaluated the integral on Wolfram Alpha for a couple of specific values and noticed that it was always of the form $F(b)-F(a)$ where $b>a$ that just screamed use a double integral to me.

• June 14th 2011, 05:48 PM
Random Variable
Re: improper integral #4

Quote: Originally Posted by Drexel28
Yes, that may work. But it wasn't what was most natural to me. I evaluated the integral on Wolfram Alpha for a couple of specific values and noticed that it was always of the form $F(b)-F(a)$ where $b>a$ that just screamed use a double integral to me.

The substitution is not something I came up with. My automatic approach would be to find the Cauchy principal value using contour integration and then drop the P.V. label because the integral is convergent. But if you make that substitution you get

$\int^{\infty}_{0} \frac{ \ln x}{(x+a)(x+b)} \ dx = \ln (ab) \int^{\infty}_{0} \frac{du}{(u+a)(u+b)} - \int^{\infty}_{0} \frac{ \ln u}{(u+a)(u+b)} \ du$

so

$\int^{\infty}_{0} \frac{\ln x}{(x+a)(x+b)} \ dx = \frac{\ln (ab)}{2} \int^{\infty}_{0} \frac{dx}{(x+a)(x+b)}$

EDIT:

$= \frac{\ln a + \ln b}{2(a-b)} \int^{\infty}_{0} \Big(\frac{1}{x+b} - \frac{1}{x+a} \Big) \ dx = \frac{\ln a + \ln b}{2(a-b)} \ (\ln a - \ln b) = \frac{\ln^{2} a - \ln^{2} b}{2(a-b)}$

• June 14th 2011, 07:41 PM
Random Variable
Re: improper integral #4

The same substitution can be used to evaluate $\int^{b}_{a} \frac{\ln x}{(x+a)(x+b)} \ dx$.
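For readers who want the substitution step spelled out: with $u = \frac{ab}{x}$ we have $x = \frac{ab}{u}$ and $dx = -\frac{ab}{u^{2}}\,du$, while $x+a = \frac{a(u+b)}{u}$ and $x+b = \frac{b(u+a)}{u}$, so $\frac{dx}{(x+a)(x+b)} = -\frac{du}{(u+a)(u+b)}$. Also $\ln x = \ln(ab) - \ln u$, and the limits $x:0\to\infty$ become $u:\infty\to 0$; reversing them absorbs the minus sign. Altogether $\int_{0}^{\infty} \frac{\ln x}{(x+a)(x+b)} \ dx = \int_{0}^{\infty} \frac{\ln(ab) - \ln u}{(u+a)(u+b)} \ du$, which is exactly the identity used above.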
http://physics.stackexchange.com/questions/2119/equivalent-spring-constant-for-infinite-square-grid-of-springs/2128
# Equivalent spring-constant for infinite square grid of springs Consider an infinite square grid, where each side of a square is a spring following Hooke's law, with spring constant $k$. What is the relation between the force and displacement between two points? If they are proportional, what is the equivalent spring constant between the origin and the point $(x,y)$ (integers) ? Edit 1: I also want to know this: Suppose you make the springs so small that this can be treated as a continuous sheet, at what speed will a wave propagate? Assuming a wave starting as an initial displacement perpendicular to the sheet. Given some initial state, is there an equation for the time-evolution of the continuous sheet? Edit 2: Suppose there is a mass at every node, and its $(x,y)$-coordinates is fixed, it only vibrates out of the plane. Consider that we take the continuous limit, such that we get a 2D membrane of mass density $\mu$. 1. Is the membrane isotropic? 2. Suppose we use another tiling (like hexagonal) before taking the continuous limit, will this sheet behaves the same way? 3. If not, but they are both isotropic, how does one characterize their difference, can they be made to behave the same way by changing the spring constant $k$? 4. What is the equation of motion for the square sheet with spring constant $k$? 5. What is the equation of motion for the square sheet if the springs obey a generalized Force law, $F=kx^n$, where $n$ is a variable. 6. What is the equation of motion for a 3D cubic grid? I am particularly interested in answers to 1., 2. and 3. I dont expect anyone to answer all these and will also accept an answer which does not explain anything but simply provides a good reference. - I'm not sure I understand this. If you just pick two points, you're not getting enough information. You need to know the displacements of every node connected to a given point in order to find the force on that point. – Mark Eichenlaub Dec 21 '10 at 13:52 1 This is an infinite dimensional problem. Each connection between springs is roughly one degree of freedom. You can analyse the equilibrium case, by assuming all springs must displace equally (thus reducing the nº of degrees to 1). – Bruce Connor Dec 21 '10 at 15:06 3 sniping physicists! – Jeremy Dec 21 '10 at 15:28 2 This looks like a model for a crystal except it can't be stable like this. You can continuously deform the angle between the sides of the square into a rhombus and eventually (letting the angle go to zero) obtain a one-dimensional model. Reasonable crystal models also add diagonal interaction into the square that makes this angle deformation disadvantageous (because you would make the diagonal spring twice longer if you'd collapse the square). – Marek Dec 21 '10 at 18:23 2 @Kalle43 You should simply your question, not complicate it. It is a very complex system you suggested. Reduce it to manageable limits, state assumptions, describe the grid, and you might get your answer. – Bruce Connor Dec 21 '10 at 19:47 show 12 more comments ## 3 Answers I'll answer only the third one (for now at least); the movement with limit to small vertical oscillations will be governed by the drum equation: $\ddot{s}(x,y)=c^2 \nabla^2 s(x,y)$ where $s(x,y)$ is a vertical displacement in point $(x,y)$ and $c$ is the weave speed; using dimensional analysis I would say that $c\sim\sqrt{\frac{k}{\sigma}}$, where $\sigma$ is the mass density. Of course everything is getting much more complex with larger amplitudes. - 1 I don't believe this is really correct. 
The wave equation is derived from the assumption that nodes oscillate around stable equilibrium positions. This is not the case in the spring model, which can also be arbitrarily deformed (i.e. it is not rigid). – Marek Dec 21 '10 at 19:38

1 @marek please find a copy of Goldstein's mechanics. If it is one of the newer editions (with Poole and Safko) then have a look at Ch. 13 "Introduction to Lagrangian and Hamiltonian Formulations for Continuous Systems and Fields." If you find any inconsistencies in that treatment then please post a question because that would be big news. – user346 Dec 21 '10 at 20:16

2 @Marek This is a small deformation approximation. – mbq♦ Dec 21 '10 at 20:36

1 @mbq: okay, but then it should be noted that it's not valid (at least not obviously) because you can always deform the square into a rhombus without expending energy in this model. While the small approximations are usually done around stable solutions that you can't deform in this way. – Marek Dec 21 '10 at 21:08

1 @Marek Small vertical deformation then. However, those horizontal motions in linear approximation will also have a speed proportional to $\sqrt{\frac{k}{\sigma}}$. – mbq♦ Dec 21 '10 at 22:49

I stick to the first question. If you only do small displacements, and the two points are along the same line of springs, then the effective spring rate is $$k_{eff} = \frac{k}{N}$$ where $N$ is the number of springs between the points. Why? Well, split the problem like this

````
(inf)---[k_out]---(A)---[k_in]---(B)---[k_out]---(inf)
````

where `(A)` and `(B)` are the two points, and the springs are replaced with the effective springs `[k_out]` between the points and infinity, and `[k_in]` between the two points. The formula for springs in series is $\frac{1}{k_{eff}} = \frac{1}{k_1}+\frac{1}{k_2}+\ldots$, or $k_{eff}=k/N$ if all the springs have the same rate. So `[k_out]` is zero because $N=\infty$ and what's left to consider is only the springs in between the points. Note that the springs out of the line of the points are unimportant for small displacements because they only contribute higher order non-linearities. Completely different equations are needed for the continuous sheet. The wave speed has to do with the mass/density of the sheet also, not just the elasticity and the stiffness. -

If on every node of the grid you have a small mass, then you have a model for a two dimensional solid. That would behave like a two dimensional membrane. The equation of motion for every disturbance would be a wave equation. In the case of a one-dimensional grid, the wave velocity for such a wave would be $$c^2=\frac{kl^2}{m}$$ where $l$ is the distance between two neighbouring masses. In the case of the two dimensional grid, you will probably also have a geometric factor.

Update: Regarding the last edit, if you have a tension that characterizes the membrane, then the velocity is the square root of the tension over the mass density. So the geometry of the thing would play some part in both the tension and the mass density. That is because if you change the shape of the cell, then you assign a different surface area to every mass and you also assign a different number of branches with springs to every node, thus changing the effective spring constant. These are my qualitative guesses. -

2 "then you have a model for a two dimensional solid" -> no you don't. See my comment under the question. In short: this model is not stable to perturbation that collapses it to 1D and so it is essentially just a 1D model.
– Marek Dec 21 '10 at 19:34

1 How does that happen for an infinite grid? Anyway, that doesn't matter for the wave velocity. – Vagelford Dec 21 '10 at 20:09

1 ""then you have a model for a two dimensional solid" -> no you don't", yes you do @marek. I took a look at your comment and I failed to understand your argument. – user346 Dec 21 '10 at 20:11

1 @space_cadet: no you don't :-) @Vagelford: the same way it does for the square. These things are best illustrated with a picture. All I can say is: think about the same model but with the elementary face being a rhombus instead of a square. You can continuously deform these models into each other. Therefore these can't be good models for a solid, because those models need to be... well, solid :-) – Marek Dec 21 '10 at 21:13

1 Yes, but you would have to do it simultaneously for the whole infinite grid, so a small perturbation can't do it. Anyway, I didn't say it is a good grid model. It is good enough for small oscillations. – Vagelford Dec 21 '10 at 21:28
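The formula $c^2=\frac{kl^2}{m}$ quoted above is the standard continuum limit of a one-dimensional chain of masses and springs, and the short derivation is worth recording. For masses $m$ at spacing $l$ with displacements $u_n$ along the chain, Newton's second law gives $$m\ddot{u}_n = k\left(u_{n+1}-2u_n+u_{n-1}\right).$$ If $u$ varies slowly on the scale of $l$, then $u_{n\pm 1}\approx u\pm l\,u_x+\tfrac{l^2}{2}u_{xx}$, so the right-hand side becomes $kl^2 u_{xx}$ and $$\ddot{u} = \frac{kl^2}{m}\,u_{xx},$$ a wave equation with $c^2 = kl^2/m$. This is consistent with the dimensional estimate $c\sim\sqrt{k/\sigma}$ in the first answer, since a square grid of spacing $l$ carries a surface mass density $\sigma = m/l^2$.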
http://mathoverflow.net/questions/120857/the-number-of-non-isomorphic-strongly-regular-graphs-on-n-vertices
## The number of non-isomorphic strongly regular graphs on n vertices

What is the number of strongly regular graphs on $n$ vertices, or at least how many non-isomorphic strongly regular graphs can exist? -

Thank you to everyone who answered the question. – Mojtaba Jazaeri Feb 5 at 13:54

Thank you very much, Professor Chris Godsil and Professor Aaron Meyerowitz. – Mojtaba Jazaeri Feb 5 at 17:54

## 1 Answer

No formula is known. Since Latin squares and Steiner triple systems give strongly regular graphs, lower bounds on the numbers of these structures give lower bounds if $n$ is a square or if $n=v(v-1)/6$ and $v\equiv1,3$ mod 6. (Presumably these lower bounds are very weak.) According to Brouwer's tables we have exact enumeration up to 36; the numbers on 37 and 41 are not known. (I expect that the number of srgs on $p$ vertices, $p$ prime, increases with $p$, but this has not been proved.) Of course we do not know the number of isomorphism classes of graphs on $n$ vertices. We have a procedure that allows us to compute the number for moderate values of $n$, and we know that asymptotically the number is $2^{n(n-1)/2}/n!$. Given this, our knowledge for strongly regular graphs does not seem quite so bad. -

It is worth mentioning that the lower bounds in some cases, while perhaps weak, are still enormous. The number of $m \times m$ Latin squares is known to be greater than $\frac{m!^{2m}}{m^{m^2}}$, so I suppose this would be for $n=m^2$, and there is the matter of dividing out $m!^3$ to account for isomorphisms/isotopies. But that is still a big number, and it accounts for both $n$ and the degree. Pairs, triples, etc. of pairwise orthogonal squares give more possibilities. – Aaron Meyerowitz Feb 5 at 16:59

It's true that the lower bounds we get from (say) Latin squares are enormous, but they're probably a really long way from the truth. – Chris Godsil Feb 5 at 18:30

As we know, every strongly regular graph on a prime number of vertices is a conference graph, and the Paley graph is a conference graph; the following sentence is due to Willem H. Haemers: "For v = 5, 9, 13 and 17, the Paley graph is the only one with the given parameters. If $v \geq 25$, other graphs with the same parameters exist." (in the paper "Matrices for graphs, designs and codes"). Why is this sentence true? If it is true, then there are at least 2 non-isomorphic strongly regular graphs on a prime number of vertices $p>25$. – Mojtaba Jazaeri Feb 9 at 18:52

Ted Spence determined all conference graphs on 29 vertices (see maths.gla.ac.uk/~es/srgraphs.php); there were 41 of them. I have not seen a proof that for any prime $p$ with $p\ge29$ there are at least two conference graphs on $p$ vertices. If you want to know what Haemers meant, you will have to ask him. – Chris Godsil Feb 9 at 19:59
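To give a sense of the size of the Latin-square lower bound quoted in the comments, here is a small Python sketch that evaluates $\log_{10}$ of the bound $m!^{2m}/m^{m^2}$ and of the crude $m!^3$ correction mentioned above; the particular values of $m$ printed are arbitrary.

```python
import math

def log10_latin_square_lower_bound(m):
    """log10 of the lower bound (m!)^(2m) / m^(m^2) on the number of m x m Latin squares."""
    return 2 * m * math.log10(math.factorial(m)) - m**2 * math.log10(m)

for m in (10, 20, 30):
    bound = log10_latin_square_lower_bound(m)
    correction = 3 * math.log10(math.factorial(m))  # dividing out m!^3, as in the comment
    print(f"m={m}: bound ~ 10^{bound:.0f}, after dividing by m!^3 still ~ 10^{bound - correction:.0f}")
```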
http://www.physicsforums.com/showthread.php?t=106277
Physics Forums

## Pressure of Fluid

I have noticed what appears to be a geometric theory of pressure vessels. It's probably already been proved, so has anyone heard of it? If not, I may play around to see if I can get any math out of it to work. I have noticed that if you have any closed body with internal pressure, and you make a cut anywhere along the body and separate the body so that it's a new free body diagram, then the net force in the direction perpendicular to the cut is always going to be equal to the pressure times the cross-sectional area where the cut was made. It's similar to Gauss' law about net flux in a way, except that the magnitude of the electric field goes down in Gauss' law; here the pressure force is always the same, even at the cut. Thanks, guys!

Hi Cyrus, it sounds like you're describing a statics problem wherein you're making a free-body diagram and finding that the sum of the forces equals zero. Is there more to it than that?

Cyrus, I agree with Q in that you are essentially doing a statics problem. There are a couple of situations where you do need to be careful: 1) Near ends. Your typical analysis is fine except for when you get close to the closed ends of the vessel. That's when things get a bit interesting. 2) If the walls are effectively considered "thin". IIRC, the criterion was approximately r/t >= 10. If this isn't the case, then the plane-stress assumption is tough to justify.

## Pressure of Fluid

Well, I already know the formula for radial stress of thick-walled vessels, and that of thin-walled vessels. What I'm saying is that when you cut a vessel in static equilibrium, they show that the pressure force in the x direction is the area of the cut plane times the pressure. But this is not directly obvious, because the pressure is acting normal to the surface around the sphere. But I did an easy proof to show that for a cylinder this is equal to the area of the cut plane times the pressure. What I'm saying is that this seems to be generally true no matter what general shape the walls of the vessel are: the pressure force in the x direction will be equal to the area of the cut plane times the pressure.

Quote by Cyrus: "But this is not directly obvious, because the pressure is acting normal to the surface around the sphere. But I did an easy proof to show that for a cylinder this is equal to the area of the cut plane times the pressure. What I'm saying is that this seems to be generally true no matter what general shape the walls of the vessel are: the pressure force in the x direction will be equal to the area of the cut plane times the pressure."

Hi Cyrus! Would you mind giving me an idea of exactly how you did this proof? I'm doing thin and thick cylinders and it's bothering me too. Also, could anyone give the general proof that Cyrus was asking for? It'd be really helpful.

Not a proof, but an intuitive way to look at it. Take a flat plate supporting pressure. You know the force applied to its supports. Roughen the surface with some sandpaper.
The force wouldn't change, even though the surface area may have just doubled and the direction of most of the pressure force is no longer normal to the plate, but normal to the groove surfaces. To get mathematical, I'd try to define the total force as an integral of pressure and area and surface normals or something. Then I guess you can show this integral equals the cut area times the pressure. Hmm, might get too messy for arbitrary geometries though.

Hi Unrest, here's my approach to what could be a simple mathematical proof, but there's something wrong... any idea what it is? According to the picture I've attached, the pressure acts along radial lines on the surface of the cylinder; it has two components P cos A and P sin A. Considering half of the cross-section of the cylindrical pipe, the components P cos A cancel out; then we get the total vertical pressure on the half cross-section of the pipe as the integral of P sin A within the limits 0 to pi, and that gives 2P, not P. Also, if we use this method to analyse a full cross-section of the pipe (considering the upper half of the pipe as well), then all the components P cos A as well as P sin A will get cancelled and there'll be no resultant pressure!! (Picture attached in the original post.)

Pressure is a scalar, force is a vector. You don't resolve pressure. Notice the original posts were about the resultant force of the pressure.

Oh gosh, small details can hit back big time!! Thanks, Studiot. Okay, so could anyone give me a hint as to how to prove this thing now?

Quote by Urmi Roy: "(considering the upper half of the pipe also), then all the components P cos A as well as P sin A will get cancelled and there'll be no resultant pressure!!" It sounds reasonable, but as Studiot said, you're calling force pressure. If you call it force, then it's fine. The half pipe gets only a net downward force, and the complete pipe gets zero net force. You'd expect zero net force on the complete pipe because it doesn't get pushed sideways. But it doesn't quite count as a proof because it only works for a cylindrical pipe. You might as well use a square pipe and it'd be even easier!

Hmm, okay, but then what is the proof?

I want to say: $S$ is the surface of the half pipe, or whatever pressure vessel; $\textbf{n}$ is the normal vector; $p$ is the pressure. Total force = $p \int_{S} \textbf{n}\,dS$, which is independent of the shape of $S$ as long as the edge is fixed. I'm sure there must be a theorem (something like Stokes?) that says this is true. If the surface is closed then there must also be a theorem saying that this surface integral over a closed surface is zero. In that case it would be a special case of the proof of Archimedes' principle for the buoyancy force. Here the pressure is constant, but with buoyancy the pressure varies with depth.

Okay, but we were trying to prove that the net effect of pressure in any direction will be equal to the area of the cut plane times the pressure, like what Cyrus said.

Yeah, I understand that. I just don't quite have my 'proof' complete. The integral I showed should be the net force caused by pressure on an arbitrarily shaped surface $S$ (such as your half pipe). So if you can prove that it's equal to the integral over any other surface with the same boundary, then you can choose to use the flat surface directly across the cut. Writing it this way takes the details of the physics out of the picture. It's just the integral of the normal vectors of a surface.
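The theorem being reached for here is the divergence theorem applied component-wise; the following short LaTeX sketch (an addition, not part of the original thread) spells out the constant-pressure case.

$$F_i = \oint_S p\, n_i \, dS = \int_V \frac{\partial p}{\partial x_i}\, dV = 0 \quad \text{for constant } p,$$

so on a cut vessel, closing the opening with the flat cut plane of area $A$ gives

$$\Big| \int_{\text{wall}} p\,\mathbf{n}\, dS \Big| = \Big| \int_{\text{cut}} p\,\mathbf{n}\, dS \Big| = pA ,$$

which is exactly the "cut area times pressure" claim.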
It seems intuitive that it should be zero for a closed surface, and for an open surface (like our case) it should be independent of the shape. I suspect you can use the same kind of theorem as for proving Archimedes' principle, as well as Gauss's law, as Cyrus said. I think I'm just about there.

This other thread says $$\mathbf{F} = \int_{surface}p(\mathbf{-n})\,dA = 0$$ That means the total force caused by pressure on a closed surface is zero. That itself needs to be proved, but it should be a simpler problem. Imagine the pressure vessel is a coke bottle with the cap on and pressure inside. Assuming the equation above is true, the total force on the inside surface is zero. But we know the force on the cap is Fcap = p*Acap. That means the total force on every other part of the bottle must be Fbottle = 0 - Fcap = -p*Acap. So that's it! The magnitude of the total force on the bottle, excluding the cap, is p*Acap.

You don't need to tie yourself in knots with surface integrals. Just cut the vessel and the fluid inside it into two parts with a plane. Draw a free body diagram for one part. The vessel was not moving, so the total force on the cut surface = 0. The total force on the cut surface = the force caused by pressure in the fluid + the force caused by stress in the vessel. Force caused by pressure = pressure x area. Force caused by stress = mean stress x area. Cut area of vessel = circumference x wall thickness, approximately. If the cut surface is a circle, $$\pi r^2 P = 2 \pi r t \sigma$$ $$\sigma = P r / 2t$$
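A quick numerical check of the thin-wall formula derived above, as a minimal Python sketch; the pressure, radius and wall thickness are made-up example values, not numbers from the thread.

```python
def thin_wall_stress(p, r, t):
    """Mean wall stress sigma = P*r/(2*t), from balancing p*pi*r^2 against sigma*2*pi*r*t."""
    return p * r / (2 * t)

# Illustrative values: 0.5 MPa internal pressure, 0.1 m radius, 2 mm wall (so r/t = 50 >> 10).
p, r, t = 0.5e6, 0.1, 0.002
print(f"sigma = {thin_wall_stress(p, r, t):.3e} Pa")  # 1.250e+07 Pa, i.e. 12.5 MPa
```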
http://stats.stackexchange.com/questions/12411/is-it-valid-to-select-a-model-based-upon-auc
Is it valid to select a model based upon AUC?

I have plotted ROC curves for several models. These models were used to classify my samples into 2 classes. Using these commands, I can obtain sensitivity vs. specificity plots for each model: ````perf <- performance(pred, "sens", "spec") plot(perf) ```` Should I rely on the area under the curve (AUC) for each model to conclude which model is better? Other than AUC, should we consider other results so as to conclude which model is better? If yes, how do I get the AUC with R? Am I right in assuming that "the smaller it is, the better is the classification power of the model"? -

I don't see what ROC curves, sensitivity, or specificity have to do with the problem at hand. – Frank Harrell Jun 28 '11 at 8:20

@Frank 1. This is a comment to the question, not an answer. 2. The AUC, the area under a ROC curve, is an acceptable way of measuring the predictive power of a classification model and hence a measure which can be used in the process of model selection. The ROC itself visualizes the AUC and allows the calculation of a decision threshold. – steffen Jun 28 '11 at 8:29

Right, I should have entered this as a comment. The AUC is a useful summary measure once you've finished fitting the model. It is not the ideal objective criterion for selecting the model (likelihood should be used for that). It does not provide a rational basis for a decision threshold, as it assumes that utilities are data driven instead of subject driven. – Frank Harrell Jun 28 '11 at 10:34

@Frank I see, maybe we have a terminology problem. 1. For me, coming from ML, model = anything which predicts the response, and model selection = choosing the model whose predictive power is the best (as long as the complexity is not too high, etc.). 2. I do not get the point regarding the decision threshold: if one calculates a weighted ROC on the basis of a cost-benefit matrix, one can determine the optimal decision threshold (bound to the model, of course), for example for the task of direct marketing (mailing). – steffen Jun 28 '11 at 10:52

3 Answers

AUROC is one of many ways of evaluating the model -- in fact it judges how good a ranking (or "sureness" measure) your method may produce. The question of whether to use it rather than precision-recall, simple accuracy, or F-measure depends only on the particular application. Model selection is a problematic issue on its own -- generally you should use the score you believe fits the application best, and take care that your selection is significant (usually it is not, and some other factors may be important, even computational time). About AUC in R -- I see you use `ROCR`, which makes nice plots but is also terribly bloated, thus slow and difficult to integrate. Try `colAUC` from the `caTools` package -- it is rocket fast and trivial to use. Oh, and bigger AUC is better. -

As mbq wrote, the answer to whether you should use AUC depends on what you are trying to do. Two points that are worth considering: AUROC is insensitive to changes in class distribution. It places even emphasis on the different classes, which means it can poorly reflect an algorithm's performance if there is a big imbalance in the distribution of classes. On the other hand, if you are more interested in identifying characteristics of the classes rather than their prevalence, this is a strength. AUROC does not capture the different costs of different outcomes, and it is seldom the case that you care equally about false positives and false negatives.
I find AUROC sensible. The curves are easy to read: they are like an intuitive version of a confusion matrix. But it is important to know what we're reading and what's left off. See also: Evaluating and combining methods based on ROC and PR curves. -

As you are using ROCR, you can get the point of the ROC curve that maximizes the area and use this to determine the corresponding threshold: ````my_prediction <- predict.gbm(object = gbm_mod, newdata = X, 100) pred <- prediction(my_prediction, Y) perf <- performance(pred, 'tpr', 'fpr') r <- rev((as.data.frame(perf@y.values)*(1-as.data.frame(perf@x.values)))[,1]) threshold <- as.data.frame(perf@alpha.values)[which(r==max(r)),1][1] ```` You can think of this optimization simply as the point that makes the largest possible rectangle under the ROC curve. -

Although this is correct and helpful in general, frankly, I do not see how this answers any of the OP's questions. As an aside, it seems to me that the call to "rev" should be removed. – steffen Nov 6 '12 at 8:54

Oh, I forgot: welcome to the site :). – steffen Nov 6 '12 at 9:01

Hmm, after re-reading the question, I thought he was asking how to get the optimal threshold for classification, when he was just asking how to use ROC/AUC. Oh well, but you need the rev() because y*(1-x) "reverses" the x axis. – cdgore Nov 7 '12 at 5:43

As far as I see, the OP asked how to select a model based on AUC (and whether this is ok). When determining a threshold solely based on the curve, one can also stick to the AUC. I applied your function on `data(ROCR.simple)` and found that with and without rev the result is correct, but without rev I get a threshold closer to the optimal point of the ROC curve, (0,1). – steffen Nov 7 '12 at 9:59

Please see earlier comments. AUC and ROC should play no role. The likelihood function plays a role for a reason, and there are generalized $R^2$ measures based on log likelihood. – Frank Harrell Nov 12 '12 at 20:50
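For readers who would rather check the AUC comparison outside of R, here is a minimal Python sketch using scikit-learn; the arrays y_true, scores_a and scores_b are placeholders standing in for your labels and the two models' predicted scores, not objects from the question.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder data: true binary labels and two models' scores on the same samples.
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
scores_a = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.3])
scores_b = np.array([0.2, 0.3, 0.60, 0.9, 0.1, 0.4, 0.8, 0.5])

# Bigger AUC is better (0.5 ~ random ranking, 1.0 = perfect ranking).
print("AUC model A:", roc_auc_score(y_true, scores_a))
print("AUC model B:", roc_auc_score(y_true, scores_b))

# A Youden-style threshold for model A: maximize tpr - fpr along the ROC curve.
fpr, tpr, thresholds = roc_curve(y_true, scores_a)
print("threshold:", thresholds[np.argmax(tpr - fpr)])
```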
http://mathematica.stackexchange.com/questions/6633/check-homework-integration-in-mathematica/6634
Check homework integration in Mathematica

In my homework I have to compute (by hand) $$\iint\limits_{x^2+y^2\leq 1}(x^2+y^2)\,\mathrm dx\,\,\mathrm dy.$$ My solution so far: Let $f(x,y)=x^2+y^2$ and $K=\{(x,y)\in\mathbb{R}^2:x^2+y^2\leq 1\}$. With the transformation to polar coordinates $\varphi:(r,\phi)\mapsto(r\cdot\cos\phi,r\cdot\sin\phi)=(x,y)$ and the determinant of the Jacobian matrix $|\det D_\varphi|=r$ we can rewrite the set as $K=\{(r\cdot\cos\phi,r\cdot\sin\phi):r\in[0,1],\phi\in[0,2\pi]\}$ and our integrand too as $f(x,y)=x^2+y^2=r^2$. $$\iint\limits_K f(x,y)\,\mathrm dx\,\mathrm dy=\int\limits_0^1\int\limits_0^{2\pi}r^2\cdot |\det D_\varphi|\,\mathrm d\phi\,\mathrm dr=2\pi\int\limits_0^1r^3\,\mathrm dr=\left.2\pi\cdot\frac{1}{4}r^4\right|_0^1=\frac{\pi}{2}.$$ I now want to check the result with a CAS. I am pretty new to Mathematica, so I just haven't found any clue on how to enter such an integral for computation. Any help out there? -

1 Answer

In Mathematica: `Integrate[Integrate[x^2 + y^2, {x, -Sqrt[1 - y^2], Sqrt[1 - y^2]}], {y, -1, 1}]` Or, shorter: `Integrate[x^2 + y^2, {y, -1, 1}, {x, -Sqrt[1 - y^2], Sqrt[1 - y^2]}]` The main trick is to calculate the bound on $x$ based on the current value of $y$, which is what you need to make the integration bounds explicit. Indeed, $x_{max}=\sqrt{1-y^2}$. This is something you can do in most integral-calculating math software. You can also define the region implicitly, see this. For this specific problem, that would give: `Integrate[(x^2 + y^2) Boole[x^2 + y^2 <= 1], {y, -100, 100}, {x, -100, 100}]` where the "100" bounds are just to limit the computation. However, Mathematica is even smart enough to calculate: ```Integrate[(x^2 + y^2) Boole[x^2 + y^2 <= 1], {y, -Infinity, Infinity}, {x, -Infinity, Infinity}]``` -

Rather than writing this particular problem as an iterated integral, I wonder if it could be written as a "pure" double integral. Not all double integrals can be simply written as iterated integrals. – Ragib Zaman Jun 7 '12 at 12:27

The last version with `Integrate[(x^2 + y^2) Boole[x^2 + y^2 <= 1], {y, -Infinity, Infinity}, {x, -Infinity, Infinity}]` is awesome. I will stick to the Boole function. – Christian Ivicevic Jun 7 '12 at 12:34

You're indeed supposed to be using infinite limits when using the `Boole[]` form of the integral; the assumption is that the integrand is zero outside the boundary described within the `Boole[]` function, so it all works out. – J. M.♦ Jun 10 '12 at 11:48

For checking a paper-and-pencil evaluation, definitely the method using `Boole` is the way to go: that way, if you incorrectly described the limits on `x` in terms of `y` (or vice versa) by hand, you wouldn't repeat the same error when doing the Mathematica evaluation. – murray Jun 10 '12 at 20:26
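If you want a second, independent check outside Mathematica, the same polar-coordinate computation can be done with SymPy in Python; this is a sketch added here, not part of the original answers.

```python
import sympy as sp

r, phi = sp.symbols('r phi', nonnegative=True)

# Integrand (x^2 + y^2) = r^2, times the Jacobian r, over the unit disk in polar coordinates.
result = sp.integrate(r**2 * r, (r, 0, 1), (phi, 0, 2 * sp.pi))
print(result)  # pi/2, matching the hand computation
```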
http://mathhelpforum.com/calculus/187702-derivative-integral-print.html
# Derivative of an integral

• September 10th 2011, 07:12 AM flybynight
Derivative of an integral
Hey guys, I am looking at this problem, and I don't have any idea how to do it: $\frac{d}{dx}\int_x^3e^{-t^2}dt$ Any help would be greatly appreciated. Thanks, Peter
• September 10th 2011, 07:25 AM Plato
Re: Derivative of an integral
Quote: Originally Posted by flybynight: Hey guys, I am looking at this problem, and I don't have any idea how to do it: $\frac{d}{dx}\int_x^3e^{-t^2}dt$
Suppose that each of $g~\&~h$ is a differentiable function; then $\frac{d}{{dx}}\int_{h(x)}^{g(x)} {f(t)dt} = g'(x)f(g(x)) - h'(x)f(h(x))$.
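Applying Plato's formula to the original problem, with $g(x)=3$, $h(x)=x$ and $f(t)=e^{-t^2}$, gives the following one-line check, added here for completeness:

$$\frac{d}{dx}\int_x^3 e^{-t^2}\,dt = (3)'\,e^{-3^2} - (x)'\,e^{-x^2} = 0 - e^{-x^2} = -e^{-x^2}.$$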
http://mathoverflow.net/questions/93295?sort=newest
## Separating vectors for C$^*$-algebras

(I asked this on math.stackexchange, without response). Let $A$ be a C$^*$-algebra, concretely acting on a Hilbert space $H$. Suppose that $\xi_0\in H$ is cyclic and separating for $A$ (that is, the map $A\rightarrow H, a\mapsto a(\xi_0)$ is injective with dense range). Let $M=A''$ be the von Neumann algebra generated by $A$. Need $\xi_0$ still be separating for $M$? That is, $x\in M, x(\xi_0)=0 \implies x=0$? It is standard (and easy to prove) that this is equivalent to $\xi_0$ being cyclic for $M'$. However, the usual proof breaks down, and does not show this to be equivalent to $\xi_0$ being separating for $A$. I think I can prove this using left Hilbert algebras. We turn `$\mathfrak A = \{ a(\xi_0) : a\in A \}$` into a left Hilbert algebra in the obvious way. Then run the Tomita-Takesaki machinery (actually not needed in full generality, as we start with a state, not a weight). Then the von Neumann algebra generated by $\mathfrak A$ is nothing but $M$, and so the general theory tells us that $\varphi(x) = \|x\xi_0\|$ will be a faithful weight on $M$, which is what we need. Actually, it's not at all clear to me that this is correct: I don't see why the map $S:\mathfrak A \rightarrow \mathfrak A; a\xi_0 \mapsto a^*\xi_0$ is preclosed. So now I suspect there might be a counter-example... -

## 1 Answer

The answer is yes for trace vectors, but no in general. Take a closed nowhere dense subset $C\subset[0,1]$ with positive measure and consider the state $\phi$ on $C([0,1],M_2)$ defined by `$$\phi(f)=\int_C f(x)_{11}\, dx + \int_{[0,1]\setminus C}\mathrm{tr}f(x)\,dx.$$` Here $f(x)_{11}$ is the $(1,1)$-entry of $f(x) \in M_2$. Then $\phi$ is a faithful state, and the GNS vector is a cyclic separating vector. However, it is not separating for the double commutant (which is $L^\infty([0,1],M_2)$). -

I took the liberty of correcting the LaTeX. This looks good to me! Many thanks... – Matthew Daws Apr 7 2012 at 6:33
http://mathoverflow.net/questions/44876/lipschitz-properties-of-minima-minimizers-of-convex-functions-of-two-variables
## Lipschitz properties of minima/minimizers of convex functions of two variables

Suppose I have a function $f(x,y)$ from $\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ that is convex in both $x$ and $y$. Set $g(y) = \min_{x} f(x,y)$ What I would like is for $g(y)$ to be Lipschitz: $|g(y) - g(y')| \le c \cdot \| y - y' \|$ Unfortunately, $f(x,y)$ may have a very poor Lipschitz constant for general $x$. Are there general conditions on $f$ for which the minima are Lipschitz? Alternatively, when can we say the minimizer $x^{\ast}(y) = \arg \min_x f(x,y)$ is Lipschitz in $y$? I've tried looking in a few convex optimization books for answers, but no luck. -

At least for "the" minimizer $x^*(y)$ the situation can be complicated: consider something like $f(x,y) = \max(|x|-y,0)$. Here $x^*(0)=0$, but for $y>0$ the set of minimizers is an interval. – Dirk Nov 5 2010 at 4:45

I'm perfectly willing to assume strict convexity (or even strong convexity). In the end I'm interested in "for what kinds of functions can we expect the minimizer (or minimum value) to have a small Lipschitz constant." I have a particular function in mind for an application, but this problem seemed more general, so I thought I would ask it here. – Anand Sarwate Nov 5 2010 at 5:16

## 1 Answer

I encountered the same problem three years ago and found some relevant literature. Here are a few references; see also the refs therein.

Lipschitz Behavior of Solutions to Convex Minimization Problems. Jean-Pierre Aubin, Mathematics of Operations Research, Vol. 9, No. 1 (Feb. 1984), pp. 87-111.

Lipschitz continuity of solutions of linear inequalities, programs and complementarity problems. O. L. Mangasarian and T.-H. Shiau, SIAM J. Control and Optimization, 25(3), 1987.

Lipschitz Continuity of Solutions of Variational Inequalities with a Parametric Polyhedral Constraint. N. D. Yen, Mathematics of Operations Research, Vol. 20, No. 3 (Aug. 1995), pp. 695-708.

On Lipschitzian Stability of Optimal Solutions of Parametrized Semi-Infinite Programs. Alexander Shapiro, Mathematics of Operations Research, Vol. 19, No. 3 (Aug. 1994), pp. 743-752.

Sharp Lipschitz Constants for Basic Optimal Solutions and Basic Feasible Solutions of Linear Programs. Wu Li, SIAM J. Control and Optimization, Vol. 32, No. 1, pp. 140-153, January 1994. -

Thanks a bunch! This is the direction I was missing before. I don't know why "Lipschitz convex minimization" failed to find these references earlier. Google has failed me. – Anand Sarwate Nov 6 2010 at 15:56

It is also related to sensitivity analysis in convex optimization; see, for example, Chap. 5 of Luenberger's red book. – mr.gondolier Nov 6 2010 at 20:19
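One simple sufficient condition, added here as a supplement (it is a standard elementary fact, not taken from the papers listed above): if $f(x,\cdot)$ is $c$-Lipschitz in $y$ uniformly over $x$, and the infima below are finite, then the minimum value $g$ is $c$-Lipschitz, by

$$|g(y)-g(y')| = \Big|\inf_x f(x,y) - \inf_x f(x,y')\Big| \le \sup_x |f(x,y) - f(x,y')| \le c\,\|y-y'\| .$$

Nothing of the sort holds automatically for the minimizer $x^*(y)$, which is what most of the references above address.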
http://mathhelpforum.com/calculus/203823-taylor-series.html
# Thread: 1. ## Taylor Series

Use Taylor series to verify $e^{-iz}= \cos z - i\sin z$. I know $\cos z = \sum_{n=0}^{\infty}\frac{(-1)^nz^{2n}}{(2n)!}$ and $\sin z = \sum_{n=0}^{\infty}\frac{(-1)^nz^{2n+1}}{(2n+1)!}$ and $e^{-iz} =\sum_{n=0}^{\infty}\frac{(-iz)^n}{n!}$, but I can't get the steps to get there.

2. ## Re: Taylor Series

The sum for $\cos$ runs over even terms, and the corresponding one for $\sin$ over odd terms. So you just have to check that the coefficients of $e^{-iz}$ for odd/even indices match.

3. ## Re: Taylor Series

Start with the $e^{-iz}$ series. Break it down by what happens to that $(-i)^n$ factor. That will break the $e^{-iz}$ series into the sum of four series via the values of $n$ mod 4. Collect real and imaginary terms. Simplify the real and imaginary parts (this takes some care - you have to manipulate the series and their indices carefully). That'll do it.
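Filling in the step both replies point at (splitting the exponential series by the parity of $n$, which is justified by absolute convergence), a compact version of the verification, added here, reads:

$$e^{-iz} = \sum_{n=0}^{\infty}\frac{(-i)^n z^n}{n!} = \sum_{k=0}^{\infty}\frac{(-i)^{2k} z^{2k}}{(2k)!} + \sum_{k=0}^{\infty}\frac{(-i)^{2k+1} z^{2k+1}}{(2k+1)!} = \sum_{k=0}^{\infty}\frac{(-1)^{k} z^{2k}}{(2k)!} - i\sum_{k=0}^{\infty}\frac{(-1)^{k} z^{2k+1}}{(2k+1)!} = \cos z - i\sin z,$$

using $(-i)^{2k}=(-1)^k$ and $(-i)^{2k+1}=-i\,(-1)^k$.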
http://crypto.stackexchange.com/questions/tagged/oblivious-transfer
# Tagged Questions Oblivious transfer refers to a cryptographic protocol in which a sender possesses a set of data and a receiver queries the sender for a particular member of that set in such a way that the sender does not learn which member the receiver requests. In other words, the sender is oblivious of what she ... 3answers 105 views ### Are there any differences between PIR, oblivious transfer and differential privacy? I am trying to make a taxonomy of different purposes of some cryptographic protocols. Generally speaking the purpose of PIR, oblivious transfer and differential privacy it sounds like being invented ... 0answers 63 views ### Is it possible to build an unfair noisy channel from 1 out of 2 oblivious transfer [closed] For a fair channel, sender sends a bit b and receiver gets it with probability 1/2 and gets b's flipped value with probability 1/2. It is trivial to build a fair noisy channel from 1 out of 2 ... 1answer 125 views ### Randomized Oblivious Transfer If we define Oblivious Transfer as following: Alice inputs $(x_0,x_1) \in F^2$, where $F$ is a field, and Bob inputs $b\in\{0,1\}$, then Alice gets a dummy output(for which she knows nothing about b), ... 1answer 108 views ### Is there a group of prime order which could fit the CT-Computational Diffie-Hellman assumption? I'm trying to choose a group that is hard under the Chosen-Target Computational Diffie-Hellman assumption, according to the definition in this paper, in order to implement the oblivious transfer ... 2answers 247 views ### Why use a 1-2 Oblivious Transfer instead of a 1 out of n Oblivious Transfer? When initiating an oblivious transfer, why would someone use a 1-2 oblivious transfer rather than going for an 1 out of n oblivious transfer? Perhaps a slight time overhead for the extra message ...
http://math.stackexchange.com/questions/289542/how-do-i-find-the-mle-of-theta-when-x-is-dependent-on-theta?answertab=active
# How do I find the MLE of $\theta$ when x is dependent on $\theta$?

Let $X_{1},X_{2},...,X_{n}$ represent a random sample from a distribution with pdf: $f(x; \theta)=e^{-(x-\theta)}, \theta \le x<\infty, -\infty<\theta<\infty$, and zero elsewhere. I need to find the MLE $\hat {\theta}$ of $\theta$. Since the support of the pdf is dependent on $\theta$, do I need to express the pdf in terms of an indicator function? i.e. $f(x; \theta)=e^{-(x-\theta)}I_{(\theta,\infty)}(x)$ If so, do I find the MLE in the standard manner? i.e. $L(x;\theta)=\displaystyle \prod^{n}_{i=1} f(X_{i};\theta)=e^{-(\sum^{n}_{i=1}X_{i}-n\theta)}I_{(\theta,\infty)}(X_{(1)})$ $\ln L(x;\theta)=-\displaystyle \sum^{n}_{i=1} X_{i} +n\theta +\ln I_{(\theta,\infty)}(X_{(1)})$ The next step would be to take the partial derivative of the log-likelihood function with respect to $\theta$, but how would I find the partial derivative of the indicator function? Am I approaching this question in the correct manner? Any help would be greatly appreciated! -

You can't write the log-likelihood like this. Take a look at your likelihood function instead. Which value of $\theta$ would maximize this function? – Patrick Li Jan 29 at 5:51

Partial derivatives are not the only way to maximize a function. Try to rewrite $L$ as $L(x;\theta)=c\mathrm e^{n\theta}\mathbf 1_{\theta\lt y}$ and to think about the shape of the function $\theta\mapsto c\mathrm e^{n\theta}\mathbf 1_{\theta\lt y}$. – Did Jan 29 at 8:43

## 1 Answer

Note that the likelihood function is $$\begin{align}L(\theta |x) &=e^{-(\sum^{n}_{i=1}x_{i}-n\theta)},\quad x_i \geq \theta\ \forall i \\ &=e^{-\sum^{n}_{i=1}x_{i}}\cdot e^{n\theta},\quad x_{(1)} \geq \theta \end{align}$$ Now note that $L(\theta |x)$ is maximal iff $e^{-\sum^{n}_{i=1}x_{i}}\cdot e^{n\theta}$ is maximal subject to the restriction $x_{(1)} \geq \theta$, iff $e^{n\theta}$ is maximal subject to the restriction $x_{(1)} \geq \theta$, iff $\theta =x_{(1)}$. Hence your MLE is $\hat{\theta}=x_{(1)}=\min(x_1,\dots ,x_n)$. -
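A quick way to sanity-check the answer numerically is to simulate the shifted exponential and confirm that the sample minimum sits just above the true $\theta$. A minimal Python sketch, with the chosen $\theta$ and sample size being arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.5
n = 1000

# f(x; theta) = exp(-(x - theta)) for x >= theta is a unit-rate exponential shifted by theta.
x = theta_true + rng.exponential(scale=1.0, size=n)

theta_mle = x.min()   # the MLE derived above: the sample minimum
print(theta_mle)      # slightly above 2.5; the expected gap is 1/n
```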
http://mathoverflow.net/questions/74621/invariants-and-base-change/74624
## Invariants and base change

Suppose $R$ is a Noetherian commutative ring, and $M$ a finite free $R$-module, with an action of a finitely generated discrete group $G$ by $R$-linear maps. Is there any homological condition on this data which would ensure that taking $G$-invariants commutes with base change? That is, for any finite type map $R \rightarrow S$, we have $$(M \otimes S)^G = (M^G)\otimes S.$$ For instance, suppose $M^G = 0$? -

Does $G$ operate on $M \otimes S$ via $g \cdot (m \otimes s) = (gm) \otimes s$ ? – Ralph Sep 5 2011 at 21:48

I imagine we're assuming trivial action on the coefficients. It took me a moment to realize that we do need a condition: take $G={\mathbb Z}/2$, and let it act on $M={\mathbb Z}$ by multiplication by $-1$. Then $M^G$ over $R={\mathbb Z}$ is zero. However, the invariants over $S={\mathbb Z}/2{\mathbb Z}$ are all of $M\otimes S$. – Graham Denham Sep 5 2011 at 22:17

## 2 Answers

Under the assumption that $G$ operates as guessed in my comment, one can proceed as follows. First note that the conditions make $M$ an $RG$-module. For every $RG$-module $N$, it holds that $$N^G \cong Hom_R(R,N)^G\cong Hom_{RG}(R,N)= Ext^0_{RG}(R,N) =: H^0(G;N)$$ with natural $R$-module isomorphisms. Since $M$ is $R$-torsion-free, the universal coefficient theorem applies, yielding a short exact sequence of $R$-modules (that's where the assumption about the $G$-operation on the tensor product comes into play): $$0 \to H^0(G;M) \otimes_R S \to H^0(G;M \otimes_R S) \to Tor_1^R(H^1(G;M),S) \to 0.$$ Thus $M^G \otimes S = (M \otimes S)^G$ is equivalent to the vanishing of the Tor-term. Hence we have as a homological criterion: $$Tor_1^R(H^1(G;M),S) = 0 \text{ for every } R \to S.$$ A sufficient condition is therefore: $H^1(G;M)$ is a flat $R$-module. Note: $G$ can be an arbitrary group; the finitely generated assumption isn't used.

Edit: In order to apply the universal coefficient theorem (UCT), one has to require $R$ to be hereditary and $M$ to be a projective $R$-module (if $M$ is supposed to be a finitely generated projective $R$-module, it suffices for $R$ to be semi-hereditary).

Remark 1: Hereditary means that submodules of projective modules are again projective; semi-hereditary means that submodules of finitely generated projective modules are again projective. For instance, Dedekind domains are hereditary and Prüfer domains are semi-hereditary.

Remark 2: For the convenience of the reader, let me include the way I use the UCT. According to Weibel's homological algebra book (Theorem 3.6.1): If $P$ is a chain complex of flat $R$-modules such that for each $n$, $d(P_n)$ is a flat submodule of $P_{n-1}$, then the following sequence is exact for every $R$-module $S$: $$0 \to H_n(P) \otimes_R S \to H_n(P \otimes_R S) \to Tor_1^R(H_{n-1}(P),S) \to 0.$$ Now assume $M$ is a projective $R$-module, $X$ is a free resolution of $R$ over $RG$ such that each $X_n$ is a free $RG$-module of finite rank (for example one can take the bar resolution), and set $P := Hom_{RG}(X,M)$. Then, as $R$-modules, $$P_n \cong Hom_{RG}((RG)^k,M) \cong \oplus_1^k Hom_{RG}(RG,M) \cong M^k$$ is a projective, thus flat, $R$-module, and by heredity the conditions of the UCT are fulfilled. Furthermore, one has to note that $$Hom_{RG}(X,M) \otimes_R S \cong Hom_{RG}(X,M \otimes_R S)$$ holds, since $X_n$ is a free $RG$-module of finite rank and by my assumption about the $G$-operation on the tensor product (!).
-

The idea sounds fine, but since $R$ isn't necessarily a PID, you need to be careful with universal coefficients. I suspect you want $Tor_i(H^i(G,M),S)=0$ to get the spectral sequence to collapse... – Donu Arapura Sep 6 2011 at 2:44

Thanks for your hint. In fact I was a little bit careless about the assumptions needed to apply the UCT. – Ralph Sep 6 2011 at 7:51

I'm sorry to dig this up, but you show here that $H^0(G;M)\otimes_R S \rightarrow H^0(G;M\otimes_R S)$ is always injective. Is that true without the higher vanishing of the $Tor$-groups mentioned in Donu Arapura's comment? – tkr Oct 29 at 22:58

As explained in the "Edit" part, I assume, in addition to the assumptions of the OP, that $R$ is hereditary. – Ralph Oct 30 at 9:17

Choose generators $g_1,\dots,g_n$ of $G$ and let $f\colon M\to M^n$, $f(m)=(g_im-m)_{i=1,\dots,n}$. This gives an exact sequence $$0\to M^G\to M\to M^n\to\mathrm{coker}(f)\to0$$ which can be interpreted (EDITED) as part of a flat resolution of $\mathrm{coker}(f)$. Since $\ker(f\otimes1_S)=(M\otimes S)^G$, the quotient $(M\otimes S)^G/\mathrm{im}(M^G\otimes S)$ may be identified with $\mathrm{Tor}_1(\mathrm{coker}(f),S)$, so they both vanish for all $S$ iff $\mathrm{coker}(f)$ is flat. If $\mathrm{coker}(f)$ is flat, we also get that $M^G\otimes S\to(M\otimes S)^G$ is injective, so we get the equivalence: $M^G\otimes S\to(M\otimes S)^G$ is bijective iff $\mathrm{coker}(f)$ is flat. -

Needs attention to hypotheses, again. $M$ is free over $R$, but $M^G$ need not be flat, I think, unless (say) $R$ is a PID. – Graham Denham Sep 6 2011 at 20:21

$\mathrm{coker}(f)$ is just $H^1(G;M)$ in disguise. This follows easily from [Hilton, Stammbach: A Course in Homological Algebra] VI.4, equation (4.5). I also agree with Graham's concern about the flatness of $M^G$. Nevertheless, nice explication (+1). – Ralph Sep 6 2011 at 22:26

I guess I got confused by the fact that $M^G$ is flat in the case we are trying to characterize. My problem with using $f_1\colon M\to\mathrm{Hom}_G(IG,M)$ (so that $\mathrm{coker}(f_1)=H^1(G,M)$) is that I do not see why the kernel of $M\otimes S\to\mathrm{Hom}_G(IG,M)\otimes S$ should still be $(M\otimes S)^G$. – a-fortiori Sep 7 2011 at 6:30
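To tie the flatness criterion back to Graham Denham's counterexample in the comments, here is a worked instance added for illustration (not part of the original thread): for $G=\mathbb{Z}/2$ with generator $\sigma$ acting on $M=\mathbb{Z}$ over $R=\mathbb{Z}$ by $-1$, the periodic resolution for cyclic groups gives

$$H^1(G;M)\;\cong\;\ker(1+\sigma)/\operatorname{im}(\sigma-1)\;=\;\mathbb{Z}/2\mathbb{Z}, \qquad \operatorname{Tor}_1^{\mathbb{Z}}\big(\mathbb{Z}/2\mathbb{Z},\,\mathbb{Z}/2\mathbb{Z}\big)\;\cong\;\mathbb{Z}/2\mathbb{Z}\;\neq\;0,$$

so $H^1(G;M)$ is not flat and the Tor obstruction is nonzero exactly for $S=\mathbb{Z}/2\mathbb{Z}$, matching the observed failure of base change there.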
http://en.wikipedia.org/wiki/Fermi-Dirac_statistics
# Fermi–Dirac statistics

In quantum statistics, a branch of physics, Fermi–Dirac statistics describes the distribution of particles in a system comprising many identical particles that obey the Pauli exclusion principle. It is named after Enrico Fermi and Paul Dirac, who each discovered it independently, although Enrico Fermi defined the statistics earlier than Paul Dirac.[1][2]

Fermi–Dirac (F–D) statistics applies to identical particles with half-odd-integer spin in a system in thermal equilibrium. Additionally, the particles in this system are assumed to have negligible mutual interaction. This allows the many-particle system to be described in terms of single-particle energy states. The result is the F–D distribution of particles over these states, and it includes the condition that no two particles can occupy the same state, which has a considerable effect on the properties of the system. Since F–D statistics applies to particles with half-integer spin, these particles have come to be called fermions. It is most commonly applied to electrons, which are fermions with spin 1/2. Fermi–Dirac statistics is a part of the more general field of statistical mechanics and uses the principles of quantum mechanics.

## History

Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current.[3] It was also difficult to understand why the emission currents, generated by applying high electric fields to metals at room temperature, were almost independent of temperature. The difficulty encountered by the electronic theory of metals at that time was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant k. This statistical problem remained unsolved until the discovery of F–D statistics.

F–D statistics was first published in 1926 by Enrico Fermi[1] and Paul Dirac.[2] According to an account, Pascual Jordan developed the same statistics in 1925, which he called Pauli statistics, but it was not published in a timely manner.[4] According to Dirac, it was first studied by Fermi, and Dirac called it Fermi statistics and the corresponding particles fermions.[5] F–D statistics was applied in 1926 by Fowler to describe the collapse of a star to a white dwarf.[6] In 1927 Sommerfeld applied it to electrons in metals[7] and in 1928 Fowler and Nordheim applied it to field electron emission from metals.[8] Fermi–Dirac statistics continues to be an important part of physics.

## Fermi–Dirac distribution

For a system of identical fermions, the average number of fermions in a single-particle state $i$ is given by the Fermi–Dirac (F–D) distribution,[9] $\bar{n}_i = \frac{1}{e^{(\epsilon_i-\mu) / k T} + 1}$ where $k$ is Boltzmann's constant, $T$ is the absolute temperature, $\epsilon_i$ is the energy of the single-particle state $i$, and $\mu$ is the total chemical potential. At zero temperature, $\mu$ is equal to the Fermi energy plus the potential energy per electron.
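As a small numerical illustration of the distribution just defined, here is a Python sketch added to the article text; the temperatures and chemical potential are example values echoing the μ = 0.55 eV used in the figure below.

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev, mu_ev, temperature_k):
    """Mean occupation of a single-particle state: 1 / (exp((e - mu)/kT) + 1)."""
    return 1.0 / (math.exp((energy_ev - mu_ev) / (K_B_EV * temperature_k)) + 1.0)

mu = 0.55  # eV
for T in (50, 150, 375):
    # Occupation drops from ~1 below mu to ~0 above mu; the step is sharper at lower T.
    print(T, [round(fermi_dirac(e, mu, T), 3) for e in (0.50, 0.55, 0.60)])
```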
For the case of electrons in a semiconductor, $\mu$ is typically called the Fermi level or electrochemical potential.[10][11] The F–D distribution is only valid when the fermions do not significantly interact with each other, so that the addition of a fermion does not disrupt the values of $\epsilon_i$. Since the F–D distribution was derived using the Pauli exclusion principle, which allows at most one electron to occupy each possible state, a result is that $0 < \bar{n}_i < 1$.[12]

(Figures: the Fermi–Dirac distribution's energy dependence, which is more gradual at higher T, with $\bar{n} = 0.5$ when $\epsilon = \mu$; not shown is that $\mu$ decreases for higher T.[13] Also its temperature dependence for $\epsilon > \mu$.)

### Distribution of particles over energy

(Figure: Fermi function $F(\epsilon)$ vs. energy $\epsilon$, with $\mu$ = 0.55 eV, for various temperatures in the range 50 K ≤ T ≤ 375 K.)

The above Fermi–Dirac distribution gives the distribution of identical fermions over single-particle energy states, where no more than one fermion can occupy a state. Using the F–D distribution, one can find the distribution of identical fermions over energy, where more than one fermion can have the same energy.[14] The average number of fermions with energy $\epsilon_i$ can be found by multiplying the F–D distribution $\bar{n}_i$ by the degeneracy $g_i$ (i.e. the number of states with energy $\epsilon_i$),[15] $\begin{alignat}{2} \bar{n}(\epsilon_i) & = g_i \ \bar{n}_i \\ & = \frac{g_i}{e^{(\epsilon_i-\mu) / k T} + 1} \\ \end{alignat}$ When $g_i \ge 2$, it is possible that $\bar{n}(\epsilon_i) > 1$, since there is more than one state that can be occupied by fermions with the same energy $\epsilon_i$. When a quasi-continuum of energies $\epsilon$ has an associated density of states $g(\epsilon)$ (i.e. the number of states per unit energy range per unit volume[16]), the average number of fermions per unit energy range per unit volume is $\bar{\mathcal{N}}(\epsilon) = g(\epsilon) \, F(\epsilon)$ where $F(\epsilon)$ is called the Fermi function and is the same function that is used for the F–D distribution $\bar{n}_i$,[17] $F(\epsilon) = \frac{1}{e^{(\epsilon-\mu) / k T} + 1}$ so that $\bar{\mathcal{N}}(\epsilon) = \frac{g(\epsilon)}{e^{(\epsilon-\mu) / k T} + 1}$.

## Quantum and classical regimes

The classical regime, where Maxwell–Boltzmann (M–B) statistics can be used as an approximation to F–D statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. Using this approach, it can be shown that the classical situation occurs if the concentration of particles corresponds to an average interparticle separation $\bar{R}$ that is much greater than the average de Broglie wavelength $\bar{\lambda}$ of the particles,[18] $\bar{R} \ \gg \ \bar{\lambda} \ \approx \ \frac{h}{\sqrt{3mkT}}$ where $h$ is Planck's constant, and $m$ is the mass of a particle. For the case of conduction electrons in a typical metal at T = 300 K (i.e. approximately room temperature), the system is far from the classical regime, since $\bar{R} \approx \bar{\lambda}/25$. This is due to the small mass of the electron and the high concentration (i.e. small $\bar{R}$) of conduction electrons in the metal.
Thus F–D statistics is needed for conduction electrons in a typical metal.[18] Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the white dwarf's temperature is high (typically T = 10,000 K on its surface[19]), its high electron concentration and the small mass of each electron preclude using a classical approximation, and again F–D statistics is required.[6]

## Three derivations of the Fermi–Dirac distribution

### Derivation starting with grand canonical distribution

The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential µ fixed by the reservoir). Because the fermions do not interact, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir. In other words, each single-particle level is a separate, tiny grand canonical ensemble. By the Pauli exclusion principle there are only two possible microstates for the single-particle level: no particle (energy E = 0), or one particle (energy E = ϵ). The resulting partition function for that single-particle level therefore has just two terms: $\begin{align}\mathcal Z & = \exp(0(\mu - 0)/k_B T) + \exp(1(\mu - \epsilon)/k_B T) \\ & = 1 + \exp((\mu - \epsilon)/k_B T)\end{align}$ and the average particle number for that single-particle substate is given by $\langle N\rangle = k_B T \frac{1}{\mathcal Z} \left(\frac{\partial \mathcal Z}{\partial \mu}\right)_{V,T} = \frac{1}{\exp((\epsilon-\mu)/k_B T)+1}$ This result applies for each single-particle level, and thus gives the exact Fermi–Dirac distribution for the entire state of the system.

### Derivations starting with canonical distribution

It is also possible to derive approximate Fermi–Dirac statistics in the canonical ensemble. These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles. The reason for the inaccuracy is that the total number of fermions is conserved in the canonical ensemble, which contradicts the implication in Fermi–Dirac statistics that each energy level is filled independently from the others (which would require the number of particles to be flexible).

## References

1. Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. McGraw–Hill. ISBN 978-0-07-051800-1.
2. Blakemore, J. S. (2002). Semiconductor Statistics. Dover. ISBN 978-0-486-49502-6.
3. Kittel, Charles (1971). Introduction to Solid State Physics (4th ed.). New York: John Wiley & Sons. ISBN 0-471-14286-7. OCLC 300039591.

## Footnotes

1. Fermi, Enrico (1926). "Sulla quantizzazione del gas perfetto monoatomico". Rendiconti Lincei (in Italian) 3: 145–9; translated as Zannoni, Alberto (transl.) (1999-12-14). "On the Quantization of the Monoatomic Ideal Gas". arXiv:cond-mat/9912229 [cond-mat.stat-mech].
2. Dirac, Paul A. M. (1926). "On the Theory of Quantum Mechanics". Proceedings of the Royal Society, Series A 112 (762): 661–77. Bibcode:1926RSPSA.112..661D. doi:10.1098/rspa.1926.0133. JSTOR 94692.
3. "History of Science: The Puzzle of the Bohr–Heisenberg Copenhagen Meeting". (Chicago) 4 (20). 2000-05-19. OCLC 43626035. Retrieved 2009-01-20.
4. Dirac, Paul A. M. (1967). Principles of Quantum Mechanics (revised 4th ed.). London: Oxford University Press. pp. 210–1. ISBN 978-0-19-852011-5.
5. Fowler, Ralph H. (December 1926). "On dense matter". Monthly Notices of the Royal Astronomical Society 87: 114–22. Bibcode:1926MNRAS..87..114F.
6. Sommerfeld, Arnold (1927-10-14). "Zur Elektronentheorie der Metalle". 15 (41): 824–32. Bibcode:1927NW.....15..825S. doi:10.1007/BF01505083.
7. Fowler, Ralph H.; Nordheim, Lothar W. (1928-05-01). "Electron Emission in Intense Electric Fields". Proceedings of the Royal Society A 119 (781): 173–81. Bibcode:1928RSPSA.119..173F. doi:10.1098/rspa.1928.0091. JSTOR 95023.
8. Kittel, Charles; Kroemer, Herbert (1980). Thermal Physics (2nd ed.). San Francisco: W. H. Freeman. p. 357. ISBN 978-0-7167-1088-2.
9. Note that $\bar{n}_i$ is also the probability that the state $i$ is occupied, since no more than one fermion can occupy the same state at the same time and $0 < \bar{n}_i < 1$.
10. These distributions over energies, rather than states, are sometimes called the Fermi–Dirac distribution too, but that terminology will not be used in this article.
11. Leighton, Robert B. (1959). Principles of Modern Physics. McGraw-Hill. p. 340. ISBN 978-0-07-037130-9. Note that in Eq. (1), $n(\epsilon)$ and $n_s$ correspond respectively to $\bar{n}_i$ and $\bar{n}(\epsilon_i)$ in this article. See also Eq. (32) on p. 339.
12.
13. Mukai, Koji; Jim Lochner (1997). "Ask an Astrophysicist". NASA's Imagine the Universe. NASA Goddard Space Flight Center. Archived from the original on 2009-01-20.
14.
15. See, for example, Derivative - Definition via difference quotients, which gives the approximation f(a+h) ≈ f(a) + f'(a) h.
16. (Reif 1965, pp. 341–2). See Eq. 9.3.17 and the remark concerning the validity of the approximation.
17. By definition, the base-e antilog of A is $e^A$.
http://math.stackexchange.com/questions/247710/notation-for-repeated-application-of-function/247755
# Notation for repeated application of function

If I have the function $f(x)$ and I want to apply it $n$ times, what is the notation to use? For example, would $f(f(x))$ be $f_2(x)$, $f^2(x)$, or anything less cumbersome than $f(f(x))$? This is important especially since I am trying to couple this with a limit toward infinity. -

## 7 Answers

You could define the notation recursively as a sequence of functions. Let $f_{n+1}(x) = f(f_n(x))$ for $n \geq 1$ with $f_1(x) = f(x)$. Sequence notation of this type is so generic that the reader will be forced to consult your definition, which will avoid any possible misinterpretation. -

In the course I took on bifurcation theory we used the notation $$f^{\circ n}(x).$$ - 2 +1 This is more clear than the accepted notation. – Zchpyvr Nov 30 '12 at 2:57 This is the notation I prefer, no matter what the commonest usage is. – Lubin Nov 30 '12 at 3:07 2 I wonder if my professor invented this notation (he also wrote the textbook for the class, so it's hard to say). It makes sense, as $\circ$ is the notation for function composition. I personally prefer just $f^n(x)$, though, when the context is clear. Writing all those $\circ$s gets a little annoying after a while :) – asmeurer Nov 30 '12 at 3:26 1 I like this a lot I have to say +1 – Simon Hayward Nov 30 '12 at 9:31 1 Oh, nice! Unconventional, but unambiguous and nearly self-explanatory to anyone familiar with ∘ and the standard superscript notation. – camccann Nov 30 '12 at 15:27

You can use the notation $f^n$ to denote the composition of the function with itself $n$ times, though this may also mean the product of $f$ with itself $n$ times. Just make sure you define your notation at the start. -

You can use $f^n(x)$ BUT be sure to tell the reader that you mean functional iteration, not $(f(x))^n$. - 6 Also not $f^{(n)}(x)$ :D – Simon Hayward Nov 30 '12 at 9:30

If you take function iteration as a fold of self-composition, you can use a sum-like notation: $\bigcirc^nf = \underbrace{f \circ \dots \circ f}_{n\:\text{times}}$ Where: $\left({\bigcirc^0f}\right)(x) = x$ Granted, this is not very compact, and I would prefer to typeset the limit directly above the circle. That aside, it does combine tolerably with limit notation: $\lim_{n\rightarrow\infty}\bigcirc^nf$ -

You may also use Lagrange's notation for derivatives, $f^{(n)}(x)$, instead of the more commonly used notations $f^n(x)$ or $f_n(x)$. EDIT: Or you can use left indices: $^n f(x)$ or $_n f(x)$ or $^{(n)} f(x)$ or $_{(n)} f(x)$. - 8 Wouldn't using Lagrange's notation be confusing since it already has a purpose? I don't want to make something analogous to $f^{-1}(f(x))=x$. That's a nightmare. – JShoe Nov 30 '12 at 2:52

There are two common notations for this, in two different communities: 1. People in quantum physics, functional analysis and similar topics write everything without parentheses, and for them, naturally $B^2x=BBx=B(B(x))$. 2. Some people in algebra treat functions as "modifiers" and, when $\alpha$ is a function, write the image of $x$ under $\alpha$ as $x\alpha$. For them, $x\alpha^2$ is what I would write as $\alpha(\alpha(x))$. In any case, as others mention, you can use whatever you like, as long as you clarify your notation at the beginning. If you need it a lot, I would go for one of $$f^n(x),\qquad f_n(x),\qquad f^{\circ n}(x),\qquad f^{[n]}(x).$$ - Though most people do 1. only with linear functions (and seldom call them functions but operators). – leftaroundabout Dec 1 '12 at 17:34
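Whatever notation is chosen on paper, the iteration itself is easy to express in code; here is a small Python sketch (the helper name `iterate` is just illustrative) that builds the $n$-fold composite and uses a large $n$ to approximate a limit.

```
from functools import reduce
import math

def iterate(f, n):
    """Return the n-fold composite f∘f∘...∘f; n = 0 gives the identity map."""
    def f_n(x):
        return reduce(lambda acc, _: f(acc), range(n), x)
    return f_n

# Example: iterating cos converges to its fixed point (~0.7390851).
print(iterate(math.cos, 100)(1.0))
```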
http://mathoverflow.net/questions/115138/particles-chasing-one-another-around-a-circle
## Particles chasing one another around a circle

Two particles start out at random positions on a unit-circumference circle. Each moves counterclockwise with a random speed (distance per unit time) uniformly distributed in $[0,1]$. How long until they occupy the same position? In the example below, the red particle catches the green particle at $t=5.9$, i.e., nearly six times around the circle: The distribution of overtake-times is quite skewed, indicating perhaps the mean could be $\infty$. For example, in one simulation run, it took more than $3$ million times around the circle before one particle finally caught the other. So I don't trust the means I am seeing (about $25$). What is the distribution of overtake-times? I was initially studying $n$ particles on a circle, but $n=2$ seems already somewhat interesting...

Update (2Dec12). Alexandre Eremenko concisely established that the expected overtake-time (the mean) is indeed $\infty$. But I wonder what is the median, or the mode? Simulations suggest the median is about $1.58$ and the mode of rounded overtake-times is $1$, reflecting a distribution highly skewed toward rapid overtake. (The median is suspiciously close to $\pi/2$ ...)

Update (3Dec12). Fully answered now with Vaughn Climenhaga's derivation of the distribution, which shows that the median is $1 + \frac 1{\sqrt{3}} \approx 1.577$. - 7 run green particle run... aaargh! :( – Pietro Majer Dec 2 at 14:01

## 2 Answers

To answer your questions about median and mode, one can take Alexandre's answer a little further and compute the exact distribution function for the overtake-times. Note that the overtake-time doesn't depend on $v_1,v_2$ directly, but only on their difference. Call the difference $v$. Now $v$ is the difference of two uniformly distributed random variables on $[0,1]$, so it is supported on $[-1,1]$ with probability density function $1-|v|$. Moreover, since $\theta$ is uniformly distributed we can without loss of generality identify the cases $(v,\theta)$ and $(-v,1-\theta)$ and reduce everything to the following set-up:

• $v$ is distributed on $[0,1]$ with density function $2(1-v)$.
• $\theta$ is uniformly distributed on $[0,1]$.
• The overtake-time is $t=\theta/v$.

Now we can compute the cumulative distribution function for the overtake-time. Indeed, we have `$P(t<T) = P(\theta/v<T) = P(\theta < Tv)$`, which we can get by the following integral:

```$$ P(t<T) = \int_0^1 2(1-v) P(\theta < Tv | v) \,dv. $$```

The probability `$P(\theta < Tv | v)$` is given by the function $f(\theta,v) = \min(Tv,1)$. Thus for $T\leq 1$, we have $f(\theta,v)=Tv$ for all $v\in[0,1]$, so integrating gives `$P(t<T) = T/3$`, while for $T\geq 1$, we integrate and find

```$$ P(t<T) = \int_0^{1/T} 2(1-v)Tv\,dv + \int_{1/T}^1 2(1-v)\,dv = 1-\frac 1T + \frac 1{3T^2}. $$```

So in the end the cumulative distribution function for the overtake-time is

```$$ P(t<T) = \begin{cases} \frac T3 & T\leq 1, \\ 1 - \frac 1T + \frac 1{3T^2} & T \geq 1. \end{cases} $$```

The term $1/T$ in the last expression will give you the infinite mean, since upon differentiating the CDF you'll get a term $1/T^2$, which upon multiplying by $T$ and integrating to get the mean you end up integrating $1/T$ from $1$ to $\infty$. As for the median, it looks as though any proximity to $\pi/2$ is just a red herring, because solving for `$P(t<T) = 1/2$` yields $T=1 + \frac 1{\sqrt{3}} \approx 1.57735\dots$. - 1 Beautiful analysis!
And so satisfying to see the exact median you computed matches the simulations. Thanks! – Joseph O'Rourke Dec 3 at 11:42 1 And fun to inadvertently learn that $\pi \approx 2(1 + 1/\sqrt{3})$. :-) – Joseph O'Rourke Dec 3 at 12:53

Let the circle have length $1$ unit. Let $\theta$ be the angle (anticlockwise) from the first particle to the second at the initial position. Let $v_1,v_2$ be the speeds of the particles. I suppose they move anticlockwise, as in your movie. If $v_1>v_2$, they collide in time $T(v_1,v_2,\theta)=\theta/(v_1-v_2).$ If $v_2>v_1$, they collide in time $T(v_1,v_2,\theta)=(1-\theta)/(v_2-v_1)$. The expectation of the time is $$\int_Q T(v_1,v_2,\theta)\,dv_1\,dv_2\,d\theta,$$ where $Q=[0,1]^3$. The integral is easy to evaluate by breaking $Q$ into two pieces. But it is indeed $+\infty$, as you guessed :-) - Very clean and clear---Thanks, Alexandre! – Joseph O'Rourke Dec 2 at 14:06
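The closed-form CDF and median derived above are easy to cross-check by simulation; here is a short Monte Carlo sketch in Python (the sample size and seed are arbitrary).

```
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

theta = rng.random(n)                  # initial gap from particle 1 to particle 2
v1, v2 = rng.random(n), rng.random(n)  # independent speeds, uniform on [0, 1]

# Whichever particle is faster closes the gap ahead of it.
# (Exact ties v1 == v2 have probability zero with continuous samples.)
t = np.where(v1 > v2, theta / (v1 - v2), (1 - theta) / (v2 - v1))

print(np.median(t), 1 + 1 / np.sqrt(3))  # empirical vs exact median (~1.577)
print(np.mean(t < 1), 1 / 3)             # empirical vs exact P(t < 1)
```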
http://mathoverflow.net/questions/92595/unbounded-sequences-in-banach-spaces
## Unbounded sequences in Banach spaces

Let $X$ be a Banach space and let $T$ be a bounded operator acting on $X$. Suppose that for each linearly independent unbounded sequence $(x_n)$ in $X$, the sequence $(Tx_n)$ is unbounded. Must $T$ automatically be Fredholm? (It has, of course, finite-dimensional kernel.) EDIT: Matthew's answer 'no' is sufficient for me. EDIT2: This question might be deleted. - The second question is answered "no": $T$ could be a non-Fredholm isometry, for example. – Matthew Daws Mar 29 2012 at 19:04

## 2 Answers

We can assume WLOG that $X$ is infinite-dimensional. Since $\text{Ker}(T)$ is finite-dimensional, $\text{Ran}(T)$ is infinite-dimensional. I claim there is $C > 0$ such that for every $x \in X$, $\|Tx\| \ge C \|x\|$. If not, there is a sequence $x_n \in X$ with $\|x_n\| = n$ and $\|Tx_n\| < 1/n$. Perturbing the $x_n$ slightly if necessary, we can assume the $x_n$ are linearly independent, and this will contradict your condition. Now this implies that $T$ is one-to-one, and that $\text{Ran}(T)$ is closed (because if $T x_n$ converges, $x_n$ is Cauchy and thus converges, and $\lim Tx_n = T(\lim x_n)$). However, $T$ is not necessarily Fredholm: it could be an isomorphism onto a closed proper subspace of $X$. -

Not sure about this, but I think this has something to do with the compactness of T? In any case, T is not necessarily a Fredholm operator. - I would have entered this as a comment if I could have, as I am not sure enough about it to make it an answer, but I am unable to comment. Oh well, good luck with this question! – Jodens Potends Mar 29 2012 at 20:17
http://math.stackexchange.com/questions/235753/how-to-invert-this-function/263362
# How to invert this function?

I need to invert this function: $$y=\frac{\ln(x)}{\ln(x-1)}+1$$ - Just out of curiosity: where did you come across this problem? – Thomas Nov 12 '12 at 16:42

## 3 Answers

In general, $\dfrac{\ln a}{\ln b}\ne \ln(a-b)$. Remarks: $1.$ The false simplification was probably motivated by $\ln\left(\frac{a}{b}\right)=\ln a-\ln b$, which is true for positive $a$ and $b$. $2.$ (added) If $x\ne 1$, then the equation can be manipulated to $y\ln(x-1)=\ln x+\ln(x-1)$. We recognize $y\ln(x-1)$ as the logarithm of $(x-1)^y$. So we can rewrite our equation as $(x-1)^y=x(x-1)$, which, since $x\ne 1$, can be simplified to $(x-1)^{y-1}=x$. It is likely that the solution can be written in terms of the Lambert $W$-function. A solution in terms of elementary functions seems highly unlikely. - yes, exactly. Thank you. – TomDavies92 Nov 12 '12 at 16:20

Yes, that simplification is wrong, because the property of $\ln$ only applies in the form $\ln\left(\frac{a}{b}\right)=\ln(a) - \ln(b)$. To undo the logarithm, use its inverse, which we know from lectures is the exponential function $e^x$. -

This function is a really interesting one, I think. I've been looking at a similar one for a while: $y=\frac{\ln (x+1)}{\ln x}$. In fact, I think that my function is just yours but slid up one unit on both axes. I'm fairly sure that the inverses of your function can't be expressed with a finite number of elementary functions, or even a finite number of a wide variety of functions like the Lambert W function or the error function. I wasn't aware of your question, but I asked about my version of the function, and it got a bit more attention than this did. Maybe you can get something out of the answers that I got. - It is trivial to show that your function is the same as the OP's under the translation $x \mapsto x+1$, $y\mapsto y-1$. Of course, terms and conditions apply at points of discontinuity, etc. – Arkamis Dec 21 '12 at 19:13
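Since a closed form looks out of reach, a practical alternative is numerical inversion on a chosen branch. Below is a small sketch using SciPy's `brentq` root finder; the branch $x>2$ (where $y$ decreases from $+\infty$ to $2$) and the helper names are just one possible choice.

```
from math import log, sqrt
from scipy.optimize import brentq

def f(x):
    return log(x) / log(x - 1) + 1

def f_inverse(y, hi=1e9):
    """Invert f on the branch x > 2; valid roughly for y > 2 (more precisely y > f(hi))."""
    return brentq(lambda x: f(x) - y, 2 + 1e-12, hi)

x = f_inverse(3.0)
print(x, (3 + sqrt(5)) / 2)  # f(x) = 3  <=>  x = (x-1)^2, so x = (3 + sqrt(5))/2
print(f(x))                  # ~3.0
```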
http://mathoverflow.net/revisions/81142/list
## Return to Answer

2 edited body

Next, I suggest that you think about two special cases: (1) When $\pi_1(X)$ is trivial. This will involve using the Serre spectral sequence for the contractible path fibration over $X$ with fiber $\Omega X$, including the multiplicative structure. (2) When $\pi_k(X)$ is trivial for all $k>1$, i.e. when $\tilde X$ is contractible. For example, when $X$ is a circle. Oh, actually Galatius is saying something wrong in this case. He probably should be using homology instead of cohomology ...

1

Next, I suggest that you think about two special cases: (1) When $\pi_1(X)$ is trivial. This will involve using the Serre spectral sequence for the contractible path fibration over $X4 with fiber$\Omega X\$, including the multiplicative structure. (2) When $\pi_k(X)$ is trivial for all $k>1$, i.e. when $\tilde X$ is contractible. For example, when $X$ is a circle. Oh, actually Galatius is saying something wrong in this case. He probably should be using homology instead of cohomology ...
http://crypto.stackexchange.com/questions/2739/safely-use-cryptsignandencryptmessage?answertab=active
# Safely use CryptSignAndEncryptMessage? I am developing an application that sends messages which I want to encrypt and sign. The CryptoApi offers a function called CryptSignAndEncryptMessage. The description says, what this function actually does: The CryptSignAndEncryptMessage function creates a hash of the specified content, signs the hash, encrypts the content, hashes the encrypted contents and the signed hash, and then encodes both the encrypted content and the signed hash. The result is the same as if the hash were first signed and then encrypted. If I understood correctly, this solution could be susceptible to an attack called surreptitious forwarding? Surreptitious forwarding uses the naive "sign and encrypt" approach to allow B to forward a message of A, destined to B, to a third party C and make C think the message was from B, destined to C (although it was from A to B, and just forwarded by B). This is possible because B can decrypt the signed message and re-encrypt it for C. However, although a message can be forwarded "illegally", the actual message content cannot be changed. Does this in turn mean, if I include the receiver in the signed data, I am not susceptible to this attack? - 1 Don't you want to say "make C think the message was from A, destined to C"? I think there is no way to avoid the case described in your question, as "from B, destined to C" can just be faked by B signing and encrypting it again. – Paŭlo Ebermann♦ May 30 '12 at 11:53 ## 1 Answer If I understood correctly, this solution could be susceptible to an attack called surreptitious forwarding? If CryptSignAndEncryptMessage is implemented in the naive way, then it would seem that it is vulnerable to a forwarding attack. Microsoft has not published the exact details (AFAIK), so it is difficult to tell for sure. Does this in turn mean, if I include the receiver in the signed data, I am not susceptible to this attack? There is a published recommendation done by a few very respectable cryptographers. It is in Section 7 of On the Security of Joint Signature and Encryption, and is along similar lines to what you propose. Specifically, they recommend: 1. Whenever encrypting something, include the identity of the sender $ID_S$ together with the encrypted message. 2. Whenever signing something, include the identity of the receiver $ID_R$ together with the signed message. 3. On the receiving side, whenever either the identity of the sender or of the receiver do not match what is expected, output $\bot$. -
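To make the recommendation concrete, here is a minimal Python sketch. `sign`, `verify`, `encrypt`, and `decrypt` are hypothetical placeholders for whatever primitives the application already uses (they are not CryptoAPI calls), and the `|` delimiter stands in for a proper unambiguous encoding such as length-prefixed fields.

```
def send(message, sender_id, receiver_id, sign, encrypt):
    """Sign-then-encrypt with both identities bound in.

    sign(data) -> signed blob and encrypt(data) -> ciphertext are placeholders
    for the application's own primitives (hypothetical, not CryptoAPI functions).
    """
    signed = sign(receiver_id + b"|" + message)   # bind the intended receiver into the signature
    return encrypt(sender_id + b"|" + signed)     # bind the claimed sender into the ciphertext

def receive(ciphertext, expected_sender, my_id, decrypt, verify):
    """decrypt(ciphertext) -> plaintext; verify(blob) -> signed bytes (or raises)."""
    sender_id, _, signed = decrypt(ciphertext).partition(b"|")
    receiver_id, _, message = verify(signed).partition(b"|")
    if sender_id != expected_sender or receiver_id != my_id:
        raise ValueError("identity mismatch")     # reject, per rule 3 above
    return message
```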
http://math.stackexchange.com/questions/147771/rewriting-repeated-integer-division-with-multiplication/147832
# Rewriting repeated integer division with multiplication

In many programming languages, such as C and C++, integer division of positive numbers is defined by simply ignoring the remainder. $5 / 2 == 2$. In general, is it true of positive integers $a$, $b$, and $c$ that $(a / b) / c$ will always give the same result as $a / (b * c)$? We can assume that $b$ and $c$ can be multiplied with no overflow. -

## 2 Answers

This is very easy using the universal property of the floor function, viz. $$n\le \lfloor r \rfloor \iff n\le r,\quad \text{for } n\in \mathbb Z,\ r\in \mathbb R$$ Thus for $0 < c\in \mathbb Z,\ r\in \mathbb R$ (e.g. $r = a/b\in\mathbb Q$ in your case) $$\begin{align} n \le \lfloor \lfloor r \rfloor / c\rfloor &\iff n \le \lfloor r \rfloor / c \\ &\iff cn \le \lfloor r \rfloor \\ &\iff cn \le r \\ &\iff n \le r/c \\ &\iff n \le \lfloor r/c \rfloor \end{align}$$ and therefore $\lfloor \lfloor r \rfloor / c\rfloor = \lfloor r/c\rfloor$, since integers are equal iff they have equal predecessors, i.e. $$j = k\iff \{n:n\le j\} = \{n:n\le k\}\iff [\, n\le j\iff n\le k\,]\quad QED$$ For $r = a/b$ we get your special case $\lfloor \lfloor a/b \rfloor / c\rfloor = \lfloor a/(bc)\rfloor.$ If you know a little category theory you can view this universal property of floor as a right adjoint to inclusion, e.g. see Arturo's answer here or see most any textbook on category theory. But, of course, one need not know any category theory to understand the above proof. Indeed, I've had success explaining this (and similar universal-inspired proofs) to bright high-school students. - While both answers were helpful, I found this much easier to follow. – David Stone May 21 '12 at 23:24 – Gone May 24 '12 at 21:28

Note that $a$ has a unique representation as $k_1b+r_1$ where $0\leq r_1<b$. Here, the number $k_1$ is the result of the integer division $a/b$. Also, $a$ has a unique representation as $l(bc)+s$, where $0\leq s<bc$, so that $a/(b\cdot c)=l$. Now $k_1$ can be represented as $k_2c+r_2$, with $0\leq r_2<c$, giving that $(a/b)/c=k_2$. Note that $a=k_2(bc)+r_2b+r_1$ and that $$0\leq r_2b+r_1\leq(c-1)b+r_1<(c-1)b+b=cb.$$ By uniqueness of $l$ and $s$, it follows that $k_2=l$. In other words, $(a/b)/c$ is indeed the same number as $a/(b\cdot c)$. - I'm in doubt about the notation for $a/b$. Maybe it is more natural for computer scientists to just denote it by $a/b$. But I'm not a computer scientist... Any advice? – Egbert May 21 '12 at 16:29 1 Most programming languages that I've seen simply use a/b, and if both a and b are integers, then the operation is performed with integer division. Python 3 introduced syntax that was in some other languages (that I don't know off the top of my head) of having `3 // 2 == 1` and `3 / 2 == 1.5` python.org/dev/peps/pep-0238 – David Stone May 21 '12 at 23:05
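The identity is also easy to spot-check by brute force; here is a short Python sketch (Python's `//` is the floor division mentioned in the comments), with the ranges chosen arbitrarily.

```
# Spot-check (a // b) // c == a // (b * c) for small positive a, b, c.
for a in range(1, 500):
    for b in range(1, 40):
        for c in range(1, 40):
            assert (a // b) // c == a // (b * c)
print("no counterexample found")
```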
http://math.stackexchange.com/questions/246142/maxium-value-of-discrete-convolution
# Maximum value of discrete convolution

I'm trying to calculate the maximum possible short-term energy $E[n]$ of a sampled signal $s$ in terms of $N$ and $\text{bitdepth}$. $$E[n] =\sum_{m=-\infty}^{\infty} s^2[m]\,w[n-m]$$ where $$w[n] = 0.54 - 0.46\; \cos \left ( \frac{2\pi n}{N-1} \right), \quad 0 \le n \le N-1 \quad \text{(Hamming window, zero outside this range)}$$ and $$-2^{\text{bitdepth}} \le s[n] \le 2^{\text{bitdepth}}$$ hope this makes sense -
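One way to bound this: since $s^2[m] \le (2^{\text{bitdepth}})^2$ and $w \ge 0$, we get $E[n] \le (2^{\text{bitdepth}})^2 \sum_{k=0}^{N-1} w[k] = (2^{\text{bitdepth}})^2\,(0.54N - 0.46)$, with equality when every sample under the window is at full scale. A quick numerical check in Python (NumPy's `hamming` uses the same coefficients; the N and bitdepth values are just examples):

```
import numpy as np

N, bitdepth = 512, 16
w = np.hamming(N)                  # 0.54 - 0.46*cos(2*pi*k/(N-1)), k = 0..N-1

s_max_sq = (2.0 ** bitdepth) ** 2  # upper bound on s^2[m]
E_max = s_max_sq * w.sum()         # attained when |s[m]| is maximal across the window

print(w.sum(), 0.54 * N - 0.46)    # window sum matches the closed form
print(E_max)
```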
http://math.stackexchange.com/tags/abelian-groups/info
Tag info About abelian-groups Should be used with the (group-theory) tag. A group $(G,*)$ is said to be abelian if $a*b=b*a$ for all $a,b\in G.$
http://www.reference.com/browse/Artin-Schreier+covering
# Artin-Schreier theory

See Artin-Schreier theorem for the theory of real-closed fields.

In mathematics, Artin-Schreier theory is a branch of Galois theory, and more specifically is a positive-characteristic analogue of Kummer theory, for extensions of degree equal to the characteristic p. If K is a field of characteristic p, a prime number, any polynomial of the form $X^p - X + \alpha$, for $\alpha$ in K, is called an Artin-Schreier polynomial. It can be shown that when $\alpha$ does not lie in the subset $\left\{\, y \in K \mid y = x^p - x \ \text{for}\ x \in K \,\right\}$, this polynomial is irreducible in K[X], and that its splitting field over K is a cyclic extension of K of degree p. The point is that for any root β, the number β + 1 is again a root. Conversely, any Galois extension of K of degree p (recall that p equals the characteristic of K) is the splitting field of an Artin-Schreier polynomial. This can be proved using additive counterparts of the methods involved in Kummer theory, such as Hilbert's theorem 90 and additive Galois cohomology.

The extensions arising from Artin-Schreier polynomials are called Artin-Schreier extensions. They play a role in the theory of solvability by radicals in characteristic p, representing one of the possible classes of extensions in a solvable chain. They also play a part in the theory of abelian varieties and their isogenies: in characteristic p, an isogeny of degree p of abelian varieties must, for their function fields, give either an Artin-Schreier extension or a purely inseparable extension.

There is an analogue of Artin-Schreier theory which describes cyclic extensions in characteristic p of p-power degree (not just degree p itself), using Witt vectors, which were developed by Witt for precisely this reason.
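The irreducibility criterion is easy to test computationally when K is a prime field; the sympy sketch below (with p = 5 chosen arbitrarily) relies on the fact that over GF(p) the map x ↦ x^p − x is identically zero, so the criterion says X^p − X + α is irreducible exactly when α ≠ 0.

```
from sympy import symbols, factor

X = symbols('X')
p = 5

# alpha = 1 is not in {x^p - x : x in GF(p)} = {0}, so the polynomial is irreducible:
print(factor(X**p - X + 1, modulus=p))   # stays in one piece

# alpha = 0 is in that set, and indeed X^p - X splits into linear factors:
print(factor(X**p - X, modulus=p))
```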
http://math.stackexchange.com/questions/65027/is-this-function-bounded
# Is this function bounded?

Suppose $f:\mathbb{R}\rightarrow\mathbb{R}$ is a $C^1$ function which satisfies the following differential inequality: $$\frac{df}{dt}\leq C(f+f^{\frac{3}{2}}).$$ If $f>0$ and $f(t)\rightarrow 0$ as $t\rightarrow\infty$, then is $f$ bounded? -

## 1 Answer

No, any monotonically decreasing positive $C^1$ function that goes to infinity as $t\to-\infty$ is a counterexample, for instance $$f(t)=\begin{cases}1-t&t\lt0\\\mathrm e^{-t}&t\ge0\;.\end{cases}$$ - Thank you so much. Then I should modify my question. – Paul Sep 16 '11 at 6:38 @Paul, You have already given that $f(t) \to 0$ as $t \to \infty$. So it is clearly bounded as $t \to \infty$. More precisely, the function is bounded in the interval $(a, \infty)$ for any $a \in \mathbb R$. – Srivatsan Sep 16 '11 at 6:42 @Srivatsan Narayanan: Yes, you are right! Actually I want to see if $\frac{df}{dt}$ is bounded or not as $t\rightarrow\infty$. – Paul Sep 16 '11 at 6:45
http://physics.stackexchange.com/questions/tagged/acoustics+doppler-effect
# Tagged Questions 2answers 81 views ### In terms of the Doppler effect, what happens when the source is moving faster than the wave? I'm just trying to understand this problem from a qualitative perspective. The Doppler effect is commonly explained in terms of how a siren sounds higher in pitch as it is approaching a particular ... 2answers 74 views ### Independence of frequency in sound waves? Why does the frequency of sound wave depend only on the source? Why is the frequency and not any other "quality" independent of everything but the source? And that said, why is velocity and ... 1answer 210 views ### Meaning of negative frequency of sound wave Suppose that Alice and Bob are both holding speakers emitting sound at a frequency $f$. Alice is stationary while Bob is moving towards Alice at twice the speed of sound. In the case of Alice, if I ... 3answers 249 views ### Doppler effect “apparent frequency” In discussing Doppler effect, we use the word "apparent frequency". Does it mean that the frequency of the sound is still that of the source and it is some physiological phenomenon in the listener's ... 5answers 498 views ### Sound frequency of dropping bomb Everyone has seen cartoons of bombs being dropped, accompanied by a whistling sound as they drop. This sound gets lower in frequency as the bomb nears the ground. I've been lucky enough to not be ... 1answer 319 views ### Hearing a sound backwards because of Doppler effect Consider a supersonic plane (mach 2) aproaching a stationary sound source (e.g a fog horn on a boat). If I understand it correctly, the passengers in the plane can hear the sound twice. First at a 3 ... 3answers 251 views ### How many boats does it take to find an acoustic buoy by Doppler shift? Inspired by this question on the Doppler shift, suppose there is buoy somewhere on the surface of the ocean emitting a pure frequency. You get to place some boats wherever you want on the surface of ... 5answers 628 views ### Doppler effect of sound waves I am looking for interesting ways to introduce the Doppler effect to students. I want some situations in nature or every day life, where a student is possibly surprised and may ask "how could it be"? ...