http://www.physicsforums.com/showthread.php?p=1351759
Singularity one-dimensional?
Hello. I'm wondering if a singularity is one-dimensional. Thanks.
Recognitions: Science Advisor
I'm assuming you mean a "curvature singularity", as treated in various spacetime models in gtr (or similar theories). The short answer is that consistently assigning a "dimension" to such a locus can be very difficult, particularly in Lorentzian spacetimes. The reasons are various and technical, but one point which might help a bit is to recall that singularities are not part of the spacetime; they are places where the manifold structure breaks down badly. The kinds of "geometric singularities" (i.e. loci where strange things happen independently of choice of coordinate chart) which can arise in gtr are much more varied than most newbies probably appreciate. Interestingly enough, there is a rough classification by severity of singularity, or rather a bunch of classifications. A review is long overdue. In lieu of that, I offer an informal description of some examples:
1. The curvature singularity in the future interior of the Schwarzschild vacuum solution is a strong spacelike scalar curvature singularity. Here, "scalar singularity" means that the scalar curvature invariant $R_{abcd} \, R^{abcd}$, which is sometimes called the Kretschmann scalar, blows up. "Strong" (more precisely "crushing and destructive") means that every body whose world sheet runs into this singularity is crushed and destroyed.
2. "The" curvature singularity in the deep interior of the Kerr vacuum is a strong timelike scalar curvature singularity. Just to add to the confusion, in the maximal extension of this model (not physically realistic!) there are infinitely many different singular loci!
3. A "Big Bang" type curvature singularity in a typical cosmological model is generally a strong spacelike scalar curvature singularity. A famous conjecture, the BKL conjecture (after Belinsky-Khalatnikov-Lifschitz, the same Lifschitz who comes after Landau and whose name I am, some would say, mistransliterating, in order to avoid autobleeping ), says very very roughly that "generic" curvature singularities should have this nature, in some (not yet precisely known) sense of the term "generic". At various times, researchers have announced the solution to this conjecture, but it's been around a long time and to paraphrase Piet Hein, when you poke a hard problem, it hits back!
4. Many pp-wave solutions (these are generalizations of plane waves in flat spacetime) possess lightlike nonscalar curvature singularities. Curvature singularities in pp-wave solutions can never be "scalar curvature singularities" because all the scalar invariants of these spacetimes vanish identically. Some of these lightlike nonscalar curvature singularities are strong, as in the so-called waves of death. These are plane wave solutions (a special type of pp-wave--- the terminology is confusing but standard!) which propagate through the universe, destroying spacetime itself as they go. A directed beam with similar effects is called a thunderbolt. Even more intriguing, some of these singularities are weak lightlike nonscalar curvature singularities, which means they are possibly survivable.
In the latter case, interestingly enough, the "weakness" is revealed by looking at the expansion tensor of timelike geodesic congruences, and finding that these do not diverge, even though the tidal forces may diverge! One way to think about this is that the tidal forces blow up so quickly that small objects have no time to respond by being crushed or torn apart before they have already passed the singular locus! An interesting consequence in such cases is that gtr cannot predict what happens after they pass through such a singularity.
5. Many colliding plane wave (CPW) models feature strong spacelike curvature singularities which develop in the interaction zone after two plane waves collide. In some cases, these singularities are weaker than expected, which is of great interest since these models sometimes turn out to be locally isometric to the "shallow interior" of certain black hole models, and the weak singularity corresponds to the event horizon, i.e. this locus is not remarkable in terms of curvature; its significance arises at the level of conformal or causal structure. In addition to these, CPW models generally feature lightlike but non-curvature singularities called fold singularities, which occur "ahead" of the departing waves. In fact, you can think of these (but probably shouldn't) as propagating backwards in time from the curvature singularities! Describing the geometry of a typical CPW model more accurately would take a lot of work, unfortunately.
6. The family of Weyl vacuum solutions, which can be written down in terms of choice of axisymmetric harmonic function, give all static axisymmetric vacuum solutions in gtr, so they correspond in Newtonian gravitation to axisymmetric gravitational potentials, i.e. axisymmetric harmonic functions. Which makes sense, except that the correspondence is quite tricky! Anyway, many of these turn out to have geometric singularities on the axis of symmetry which are often called struts and which have unrealistic features making them behave a bit like rigid rods which make no contribution to the gravitational field but can hold apart massive objects. They are generally regarded as artifacts due to inappropriate choice of boundary conditions.
7. Similarly, the Robinson-Trautman null dusts are a family of exact solutions which can be regarded as Schwarzschild holes perturbed by massless radiation. It turns out that these generally feature singularities sometimes called pipes, which have unrealistic properties and are again generally regarded as artifacts due to inappropriate choice of boundary conditions.
8. Many spacetime models also feature conical singularities which are analogous to the vertex of a paper cone. These loci are places where angular deficits (or angular excesses) are concentrated.
For those who didn't understand very much of what I just said: I can't and don't really expect anyone who doesn't already know most of this stuff to understand very much. I just wanted to try to vaguely popularize the idea that it is not at all easy to single out characteristics common to all singularities; their taxonomy is simply too diverse.
Recognitions: Gold Member Science Advisor
Thanks, Chris! that's a fascinating list of various types of curvature singularity that occur in various models of spacetime----mainly I guess in the GTR (general theory of relativity) context.
several I hadn't heard of, which were fun to imagine. From my perspective (and I wonder if you would agree) it is important to stress that these singularities occur in man-made models and that doesn't automatically imply they ever occur in nature. The original poster (O.P.) who asked the question may not be clear about this---many people aren't---and may be thinking of singularities as *real things that happen in nature*. So if you agree (you being the local GTR expert) I would like to add that singularities are places in an artificial model where that model breaks down and fails to compute reasonable numbers---say it starts giving infinities for the curvature if we are talking about GTR.
the way you deal with singularities is you fix the model (if you can see a way to do that) so that it does not break down---sometimes *quantizing* a model will fix its singularities (it has been known to happen) and then you have to test the new model experimentally to check that it's better in other ways as well.
I think the O.P. was asking about the dimensionality of singularities: are they one dimensional or two dimensional or what? Clearly from the examples you gave we should expect there to be singularities of all different dimensionality and physical extent. Sometimes people have the idea that the *big bang singularity* is pointlike. Actually as far as I know among professional cosmologists (please correct me if I am wrong) the most common picture is of an infinitely extending 3D hypersurface. People seem to get the idea that the singularity is pointlike because the word "singularity" sounds like "single" and the word "single" suggests a point.
So I'd hasten to assure the O.P. that there is no one type of geometry that singularities must have---they can have various different dimensionality, and shape, and size. They can extend spatially off to infinity, or they can be spatially bounded. they are artificial loci where a model fails, and they can be as various as the models that give rise to them. and considerable research these days is devoted to getting rid of singularities (by replacing the model with one that doesn't break down). there was that 3-week workshop at Santa Barbara about it earlier this year. maybe the O.P. would like to check out the videos of some of the talks
Singularity one-dimensional?
I'll be looking up the bits and pieces with hope to grasp an overall idea. I was referring to the Big Bang Theory's singularity. I take it that's either not one-dimensional or it's unknown?
Recognitions: Science Advisor
Quote by physicsx0rz I'll be looking up the bits and pieces with hope to grasp an overall idea. I was referring to the Big Bang Theory's singularity. I take it that's either not one-dimensional or it's unknown?
"Big Bang theory" is something of a misnomer. I was trying to explain, among other things, that "big bang type cosmological singularities" are strong, spacelike, scalar curvature singularities, the most destructive and the least avoidable, if you will. But none of these singularities really have "dimensions" in the sense you mean. It is true that you can readily embed the FRW dust with S^3 hyperslices orthogonal to the world lines of the dust particles in $E^{(1,4)}$, and then--- after suppressing two dimensions so that you have a two-dimensional manifold embedded in $E^{(1,2)}$ --- it looks like an American style "football" with Big Bang and Big Crunch singularities corresponding to the two "tips".
But this picture, while vivid, is misleading if it leads you to think of the singularities as pointlike. Remember, the embedding is artificial and introduces distracting irrelevancies. Recognitions: Science Advisor Quote by marcus that's a fascinating list of various types of curvature singularity that occur in various models of spacetime----mainly I guess in the GTR (general theory of relativity) context. All in gtr, but closely related theories will have similar aspects regarding plane waves and so on. Quote by marcus From my perspective (and I wonder if you would agree) it is important to stress that these singularities occur in man-made models and that doesn't automatically imply they ever occur in nature Wow, I am sure glad you asked because you must have partially misunderstood. I was trying to say that many exact solutions studied in gtr turn out to possess unphysical features such as struts or pipes. Careful authors (IMO) deprecate these, but many authors slur over their implausibility, which (IMO) can lead to seriously misleading attempted inferences about more realistic scenarios. I was trying to suggest that these features are highly suspect within the context of gtr itself. Contrast the strong spacelike scalar singularities which generally make perfect sense within gtr, and thus should be regarded as genuine predictions of this theory. Theoretical considerations external to gtr, e.g. musings on a possible quantum theory of gravity, tend to cast doubt upon whether gtr can give an accurate picture of spacetime at the Planck scale, but such curvatures are well beyond anything astronomers are likely to be able to observe in the foreseeable future! Quote by marcus the original poster (O.P.) who asked the question may not be clear about this---many people aren't---and may be thinking of singularities as *real things that happen in nature*. I am glad you asked, because at the level of classical physics, they should regard these as real features of gtr. Now, you said "which happen in nature", but I'd say that physics is about constructing theories which describe what specific measurements will show in various situations, or if you like about understanding "how things behave" rather than "what things really are". Do electrons "really exist in Nature"? I don't even know what that would mean. Does "the Sun" really exist in Nature? If your answer is "yes", do you think it has a well defined radius "in reality"? I'd say that "the Sun" is a convenient fiction, adding that this is nothing to worry too much about. What matters is that we have a good theory in which "the electron" is well-defined. We have good theories in which we can construct good models of idealized "stars" without worrying overmuch about what it might mean to say that stars "really exist". Does Homo sapiens "really exist"? Wise biologists would say that "species" are also a convenient fiction (Ernst Mayr wrote a book on this topic). Science is full of idealizations or abstractions which may not correspond to "reality" but which help one to construct and think about mathematical models which allow us to predict what will happen in a given situation, even to engineer devices which work. Quote by marcus So if you agree (you being the local GTR expert) I would like to add that singularities are places in an artificial model where that model breaks down and fails to compute reasonable numbers---say it starts giving infinities for the curvature if we are talking about GTR. Perhaps I don't know what you mean by "artificial". 
Or would you say that all mathematical models in all physical theories are "artificial"? Be this as it may, I was trying to explain, among other things, that when an object encounters a weak nonscalar singularity, the tidal forces measured by an observer riding on this object probably diverge, but so quickly that the expansion tensor of the congruence of world lines of bits of matter in the object doesn't diverge, i.e. the object doesn't have time to respond by being destroyed before the "bad place" is in the past. Quote by marcus the way you deal with singularities is you fix the model (if you can see a way to do that) so that it does not break down---sometimes *quantizing* a model will fix its singularities (it has been known to happen) and then you have to test the new model experimentally to check that it's better in other ways as well. Its crucial to understand that in the context of gtr, curvature singularities cannot be avoided or fixed up. For that matter, I don't think I agree that singularities in field theories generally can usually be "fixed up", e.g. point mass potential in Newtonian gravitation. Quote by marcus I think the O.P. was asking about the dimensionality of singularities: are they one dimensional or two dimensional or what? Agreed. Quote by marcus Clearly from the examples you gave we should expect there to be singularities of all different dimensionality and physical extent. No! I was not very clear about this because I lack the energy to try to explain any of the technicalities, which are formidable, but I was trying to state (not explain) that it is best not to try to assign any "dimension" to curvature singularities in Lorentzian manifolds. Quote by marcus Sometimes people have the idea that the *big bang singularity* is pointlike. Actually as far as I know among professional cosmologists (please correct me if I am wrong) the most common picture is of an infinitely extending 3D hypersurface. You're probably thinking of pictures in Weinberg, The First Three Minutes. Those are good pictures, and indeed the singular locus appears as a coordinate plane, but that locus does not belong to the manifold and you shouldn't think of it as having a dimension. As matter of fact, while I deprecated "pointlike", if you simply must think of it as having a dimension, pointlike would be infinitely better than sheetlike! Quote by marcus People seem to get the idea that the singularity is pointlike because the word "singularity" sounds like "single" and the word "single" suggests a point. Oh dear, oh dear, oh dear! I didn't mean to illustrate the fact I so often decry, that it simply isn't possible to discuss subtle theories in nonmathematical terms, but I am beginning to think it was a mistake to have tried to offer a nonmathematical sketch... Quote by marcus So I'd hasten to assure the O.P. that there is no one type of geometry that singularities must have---they can have various different dimensionality, and shape, and size. They can extend spatially off to infinity, or they can be spatially bounded. they are artificial loci where a model fails, and they can be as various as the models that give rise to them. There are few things in there which I vaguely recognize as bearing some resemblance to the intuition I was trying to convey, but unfortunately, for the most part I feel that is directly contrary to what I was trying to say! Quote by marcus and considerable research these days is devoted to getting rid of singularities (by replacing the model with one that doesnt break down). 
there was that 3-week workshop at Santa Barbara about it earlier this year. maybe the O.P. would like to check out the videos of some of the talks Huh? Oh well, I'm glad you liked my post, Marcus, even though I guess it's back to the drawing board for me as a populizer and you as a consumer of PF-style popsci Perhaps literally, in my case--- all this was happening while I was grabbing some stuff I somehow neglected to previously install on this machine, so that I could draw some conformal diagrams to illustrate my post above, but I can see that these figures would probably also be seriously misunderstood, unless someone first writes a PF tutorial on interpreting conformal diagrams! Recognitions: Gold Member Science Advisor Quote by physicsx0rz I'll be looking up the bits and pieces with hope to grasp an overall idea. I was referring to the Big Bang Theory's singularity. I take it that's either not one-dimensional or it's unknown? In case you want to look up the videos of this conference about the latest work on resolving singularities, here is the URL http://online.kitp.ucsb.edu/online/singular_m07/ "kitp" is the Kavli Institute for Theoretical Physics at UC Santa Barbara the people who gave those talks are one way or another involved with the question of what to replace GTR with so as to get rid of the singularities. especially the big bang singularity. we know GTR is wrong because it breaks down at a certain places and has these unnatural glitches called singularities. so the question is, what do you replace GTR with so that it will be just as good as GTR where GTR is a success but not have these unnatural failure glitches. what you see in that list of talks is kind of the frontline leading edge research in various approaches to replacing GTR with something that is not subject to singularities. It is hard to do. Also the proponents of the different approaches argue a lot amongst themselves. I don't recommend watching these hourlong videos, it would take too much time and be incomprehensible. Just realize they are there. We have no scientific evidence that singularties exist in nature. World class people are working on developing a spacetime model that won't have a bigbang singularity. Recognitions: Science Advisor Marcus, I feel you have seriously misunderstood the scientific rationale for seeking a quantum theory of gravity. Many people seem to hate event horizons and the notion of a beginning or an end on religious or philosphical grounds. It is good to remember that such prejudices can blind you to how Nature behaves. This might also explain how you so badly misunderstood what I wrote. I advise you not to rush to attribute to either myself or to the researchers you mention motivations which we might not share, or even understand. Recognitions: Gold Member Science Advisor Chris - Thank you for your posts here, there has been a consensus of opinion on these Forums that when GR meets QT in the singularity of a BH or BB it will be GR that breaks down and the presence of singularities in GR proves that in these regimes GR breaks down and the singularities are 'unphysical'. I am glad you are shooting for the other side. Why don't you write something on interpreting conformal diagrams and illustrate you ideas here? Garth Recognitions: Science Advisor Hi, Garth, thanks for the encouragement since I was well and truly pole-axed by what Marcus wrote in his last two posts! Pig-stuck. Thrown for a loop. 
Whatever Athapaskan verb means "caused to wonder whether the entire world has gone utterly mad". You get the picture Let me try again to briefly sketch my take on (1) event horizons and curvature singularities and (2) the people who loathe them (I mean deep-down felt-in-the-gut fear and loathing): 1. These are are certainly real predictions of gtr, and thus should be expected to "occur in Nature" (apparently this is a phrase which can be more badly understood than I had recalled!) in regimes where gtr is accurate. 2. To judge from their own writings, the people who loathe the very notion of event horizons or destructive curvature singularities do so because they fearfully believe these notions, if valid, would "scientifically disprove" the core beliefs of their world view. I believe that Fear of Event Horizons and Fear of Singularities are irrational fears which arise from a more plausible human fear: fear of isolation and fear of death. People simply need not to confuse issues in theoretical physics with the issue of their own mortality, a psychological (spiritual? existential?) problem for which I can offer no assistance. Ironically, in my view, it seems clear that according to current mainstream belief, regions of strong curvature can kill humans, but there are many more likely ways to be Reaped. IOW, people who loathe black holes should probably loathe the family car. Similarly for people who fear the prospect that one day humans will become extinct. In a way, I feel they may take gtr (and science generally) much too seriously by reacting to scientific theories which employ (often very abstract) notions which seem contrary to their world view as if these notions might "really exist in Nature" in some "absolute" sense, which I feel is naive. In fact, scientific knowledge is far more supple and adaptable thing. Science loathers suffer at once from too much imagination--- and too little! Marcus, you probably feel from the above that I have misunderstood your motivations and beliefs--- and if so you're probably right. To prevent further misunderstanding, I emphasize that the above comments are based on my prior experience with dozens of persons who seem to express strong negative feelings about black holes. I hope that my psychological speculations will smooth the suddenly troubled waters in this thread, rather than fanning the flames of controversy! If that isn't mixing metaphors. I'm trying to advise the singularity loathers: lighten up! No pun intended Recognitions: Gold Member Science Advisor Quote by Chris Hillman ... Many people seem to hate event horizons and the notion of a beginning or an end on religious or philosphical grounds. ... I advise you not to rush to attribute to either myself or to the researchers you mention motivations which we might not share, or even understand. dont know these many people or how they seem to you. I was not attributing motivations to you...certainly not analyzing your motives, Chris I just think that General Relativity fails as a theory (has a limited domain of applicability) and will eventually be replaced by some better theory which does not break down at some of these singularities instead of singularities I would prefer to call them "limits to applicability where the model blows up" In the past other theories have had singularities---and this has been recognized as a flaw, or sign that the theory was of limited usefulness---and they have been replaced by better theories. 
I suppose my attitude is conservative---I expect that Gen Rel is no exception and that science will proceed as usual and Gen Rel will be replaced by something a bit more rugged that doesn't suffer from the same problems. Recognitions: Gold Member Science Advisor Quote by Garth ... a consensus of opinion on these Forums that when GR meets QT in the singularity of a BH or BB it will be GR that breaks down and the presence of singularities in GR proves that in these regimes GR breaks down and the singularities are 'unphysical'. I am glad you are shooting for the other side... I don't see the consensus you refer to---it doesnt include me. Nor do I see a combat between two sides shooting at each other. It seems to me that both conventional quantum theory and Gen Rel have problems. It strikes me as simplistic or naive to suppose that, when they are joined, one or the other will be a "winner". I don't see the current state of physics theory as a contest between quantum theory and GR. I see something more constructive than that going on. I don't see it as appropriate to argue about which theory is going to "win" and to think of opposing sides rooting or shooting for one or the other. My guess would be that merging quantum theory and spacetime theory is a creative effort that will involve learning how to modify both. Recognitions: Science Advisor Now you are sounding reasonable again to me, Marcus! Just drop the bit about insisting that whatever the first workable quantum theory of gravity looks like, it will neccessarily "exorcise" singularities or whatever, because that is not at all clear and it may well turn out not be true. Recognitions: Gold Member Science Advisor Quote by Chris Hillman Now you are sounding reasonable again to me, Marcus!.. I am glad that you find me reasonable, Chris. I'm not aware of having shifted my basic position---but I can't always account for how you take what I say. One of the enjoyments of observing the progress of scientific research is that one really cannot predict how things will go (at least I feel that I cannot.) One can have *expectations* however. I see that a considerable number of smart people consider the old (1915) Gen Rel to be flawed because it suffers from singularities (such as the BB and BH, in particular) and a growing number of people are searching for a theory of spacetime and matter to replace Gen Rel---duplicating its impressive success where it does work and extending coverage to situations where Gen Rel breaks down. If this search succeeds, which I expect it to, it will in a certain sense replace the singularities with a deeper understanding of what goes on in, and possibly also beyond, them. the conventional meaning of a singularity is where a physical theory breaks down. So you could say that the singularity is removed or resolved when you get a new theory which does not break down there. But if you prefer, when that happens I suppose you could use the word in a slightly different way and say that *the singularity is still there, we just understand better what goes on there* Some people call what replaces the former BB singularity in their models by the name "the Planck regime"-----I don't pretend to understand what is meant by that----allegedly in certain cases the model cranks along smoothly thru the former singularity, but usual ideas of space and time momentarily cease to apply. Recognitions: Science Advisor Quote by marcus I am glad that you find me reasonable, Chris. 
I'm not aware of having shifted my basic position---but I can't always account for how you take what I say. Unfortunately, I am back to being flummoxed by something you just wrote which I consider to be potentially seriously misleading. You have consistently written statements in which I agree with the second half but not with the first half! So let me reverse the order of those statements: The second halves of these statements are correct summaries of the current mainstream: Quote by marcus people are searching for a theory of spacetime and matter to replace Gen Rel---duplicating its impressive success where it does work and extending coverage to situations where Gen Rel breaks down. Quote by marcus the question is, what do you replace GTR with so that it will be just as good as GTR where GTR is a success but [be valid more generally than gtr]. I agree entirely! Furthermore, I think we all agree that the search for a new theory of gravitation is a thoroughly mainstream activity. (In this context, it is amusing to note that the mathematician John Baez, author of the semi-humorous Crackpot Index, has contributed to this effort.) But the first halves of those statements are seriously misleading: Quote by marcus I see that a considerable number of smart people consider the old (1915) Gen Rel to be flawed because it suffers from singularities (such as the BB and BH, in particular) Quote by marcus we know GTR is wrong because it breaks down at a certain places and has these unnatural glitches called singularities. My objection is that in statements like this you suggest the misleading conclusion that the object of the mainstream effort is to exorcise black holes and the Hot Big Bang Theory from astrophysics and cosmology. This is quite untrue. As I thought everyone knew, the object of the mainstream effort is to 1. reconcile quantum mechanics with a classical field theory, general relativity, 2. elucidate some fascinating connections between the notion of black holes and notions of thermodynamics. An important point regarding (2) is that it could well be that the next "gold standard theory of gravitation" might be more "thermodynamical" than "quantum". One of the fascinating trends in physics in the past few decades has involved growing recognition that mathematical techniques developed in the context of classical or quantum physics turn out to apply to the other arena. In addition, in the past decade there has been a good deal of work on nongravitational analogs of black holes which suggests that this notion may be best understood via thermodynamics. Regarding the current mainstream view on the major technical issues within gtr itself, including dealing with various kinds of geometric singularities, I could give many citations, but one short book which I particularly like is the Chandrasekhar memorial volume edited by Robert Wald, Black Holes and Relativistic Stars, University of Chicago Press, 1998. I'd highly recommend this to anyone who wants to know more about current mainstream views on theoretical issues in gtr and the search for a "better theory". Before I say anything else, I need to stress something: your personal objections to the notion that black holes "really exist in Nature" seem to be based upon the prediction in gtr that curvature singularities exist inside the horizons. 
It's important that newbies understand that historically, mainstream objections to the notion of black holes (pre 1975 or so) have really been objections to the notion of "event horizon", which should be thought of as the defining characteristic of "black hole"; as my list above should make clear, many exact solutions in gtr which are nothing like black holes, including plane waves, exhibit curvature singularities. Furthermore, many exact solutions which can be regarded as cosmological models (but certainly not as models which resmemble the Universe in which we live), such as the Goedel dust, contain no curvature singularities. So the existence of curvature singularities is certainly not a defining characteristic of either black holes or cosmological models! The above mentioned objections quickly moved to the fringe with the discovery of objects which can (according to the current mainstream viewpoint) only be interpreted as black holes in the sense of gtr. Vaguely similar objections are still promoted on the web, sometimes including PF; these should be regarded as incorrect crank opinions which are greatly at variance with the current scientific mainstream. OK, back to the book: the chapters are based upon talks delivered in 1996, subsequently revised by the various authors, but despite a major development in cosmology (the cosmological constant thing), the mainstream has not budged on the points which are most relevant here. Some particularly relevant chapters: 1. Martin Rees, "Astrophysical Evidence for Black Holes": Martin Rees recounts how and why it suddenly became universally accepted that black holes (think: event horizons) "really exist in Nature", modulo my comments above about the nature of physics. Rees certainly does not say that mainstream researchers consider "Gen Rel to be flawed because it suffers from singularities (such as the BB and BH", or anything even close to that statement. Your statement does, however, somewhat resemble the early objections (c. 1960) to the Hot Big Bang Theory back when Continuous Creation was still regarded as viable, and to early objections to Black Holes (think "event horizons", not "curvature singularities") back when (c. 1975) the suggestion that black holes are common objects in our Universe was considered highly speculative and dubious by most physicists. 2. Roger Penrose, "The Question of Cosmic Censorship": Penrose discusses some theoretical issues involving Cauchy horizons and geometric singularities in gtr, which have not yet been resolved within gtr (or by going beyond it). In particular, he discusses "thunderbolts". You will search in vain for any assertions that gtr is unworkable because of the mere existence of singularities, rather, Penrose and many others have put great effort into understanding the nature of generic solutions of the EFE within the context of gtr, effort predicated on the assumption that the theory, while clearly difficult, is not fundamentally flawed simply because singularities exist. You will search in vain for any statements to the effect that Penrose himself or other researchers consider "Gen Rel to be flawed because it suffers from singularities (such as the BB and BH)". 3. Werner Israel, "The Internal Structure of Black Holes": Israel discusses "mass inflation" and the question of what "generic" black hole models look like in gtr. That is, the maximal extension of the Schwarzschild and Kerr solutions are regarded as "physically unrealistic" due to simple but unrealistic choice of boundary conditions. 
Specifically, the black holes which apparently exist in nature are thought to have been formed by gravitational collapse and therefore have very different causal structure inside the horizon. Penrose, Israel, and others, working with gtr itself, have discussed theoretical considerations suggesting that the interior of black holes "as they really exist in Nature" is quite different from what the Schwarzschild and Kerr vacuum solutions would suggest. One line of attack on elucidating this issue involves the remarkable local isometry discovered by Chandrasekhar between the "shallow interior" of the Kerr vacuum and a certain CPW model, the Chandrasekhar-Xanthopolous vacuum. By perturbing the two incoming waves of this model one obtains exact solutions which are locally isometric to a perturbation of the shallow interior of the Kerr vacuum. Again, one will search in vain for any statements to the effect that Israel himself or other researchers consider "Gen Rel to be flawed because it suffers from singularities (such as the BB and BH)". In addition to these chapters, in Part I of the book, the entirety of Part II is devoted to survey articles on the motivations for the search for a new theory of gravitation. These survey articles can be readily supplemented by others available at the arXiv, including papers by Rovelli on the motivation for the seach for quantum gravity and the original paper by Jacobson on an important reinterpretation of the Einstein field equation. If anyone, after consulting these resources, is unconvinced that I am correctly describing the motivations for mainstream efforts working toward a quantum theory of gravity, I'd suggest posting a query in sci.physics.research specifically asking for responses from John Baez, Steve Carlip and Ted Jacobson, all of whom read that newsgroup at least sometimes and all of whom have contributed to the search under discussion. (To prevent further misunderstanding, I'd request that anyone following this suggestion include the URL of this PF thread.) Quote by marcus If this search succeeds, which I expect it to, it will in a certain sense replace the singularities with a deeper understanding of what goes on in, and possibly also beyond, them. Regarding this search, I feel your statements require some further qualifications. You used the phrase "replacement" and "better theory" in your posts. These are weasel words which could easily mislead students and the general public if left unaccompanied by suitable qualification. You could say that Newtonian gravitation was "replaced" by gtr in 1919, when the first solar system test decisively agreed* with gtr and disagreed with Newtonian theory. But it is important for students and the general public to understand that Newtonian gravitation is alive and well, and for good reason: it's much simpler to work with, so much so that it makes good sense to use it whenever you can get away with this. In particular, vacuum solutions in Newtonian theory are governed by harmonic functions, which are rather well understood mathematically. Contrast the solution space of the vacuum EFE, which after 90 odd years is still not well understood mathematically. (See again the articles by Penrose and Israel.) Or contrast the way in which Newtonian gravitation has been employed for many decades to study statistically the evolution of stellar clusters and note that relativistic elaborations have recently become popular topics of research. 
For all these reasons, I prefer to say that Newtonian gravitation is known to break down under certain circumstances. We know how to tell when we should work with gtr instead, and we have some theoretical arguments suggesting an upper bound for the curvatures/energies at which we think that gtr too must break down. (*Modulo later assertions that Eddington's data analysis was flawed--- let's not get into that; suffice it to say that gtr has been tested very thoroughly and has held up very well indeed. There is no doubt that the four classical solar system tests, and some even more impressive tests as well, give results in excellent agreement with gtr.) You could say that a quantum theory of gravitation will be a "better theory" than gtr, simply because gtr is a classical field theory, yet nothing has been better confirmed by twentieth century physics than the fact that Nature adores the quantum. This theoretical conflict at the very heart of physics is aesthetically objectionable, as I think almost everyone would agree. But it is important to stress that ultimately, the true test of which of two theories is "better" is which agrees better with observation and experiment. Here, we have a problem, because it is not yet clear that experimental tests of the long sought theory of quantum gravity which could decisively confirm the expected breakdown of gtr under certain conditions can be conducted in the forseeable future. This leads to discussion of some philosphical issues which arise from the prediction of event horizons in gtr, and the search for a self-consistent quantum theory of gravitation, issues which seem to challenge the Baconian notion of the scientific method. However, discussion of these issues should probably move to the philosophy subforum. Quote by marcus the conventional meaning of a singularity is where a physical theory breaks down. There's more to it than that, I think! Context is everything. You are probably thinking of the broad usage described in such sources as http://mathworld.wolfram.com/Singularity.html (which is discussing how the term is used in mathematics generally, especially analysis, including applied mathematics, including physics). For a previous discussion at PF, see http://www.physicsforums.com/showthread.php?t=124016 Quote by marcus So you could say that the singularity is removed or resolved when you get a new theory which does not break down there. I agree that removing the coordinate singularity in the Schwarzschild exterior chart by passing to a new chart, such as the ingoing Eddington chart, is analogous to removing a removeable singularity when studying some holomorphic function in complex analysis. I might even agree that geometric singularites in gtr are somewhat analogous to non-removeable singularities of holomorphic functions. One has to be very careful not to try to push this analogy too far, however. In particular, the natural smoothness requirement in gtr is $C^\infty$ or less , depending on context. As has been hinted at above, the maximal real analytic extensions of the exterior Schwarzschild and Kerr vacuums are considered to be unrealistic; to obtain reasonable boundary conditions you must drop the assumption of analyticity. The reason is that analytic functions are much too "rigid"; knowledge of the derivatives at some point determines the function in an entire neighborhood. To deal with radiation and avoid undesirable asymptotic properties we generally need to work with functions built out of "bump functions", which are not analytic. 
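For readers wondering what a "bump function" actually looks like, the standard textbook example (given here purely as an illustration, not as part of any particular solution discussed above) is

$$f(x) = \begin{cases} \exp\!\left(-\dfrac{1}{1-x^2}\right), & |x| < 1,\\ 0, & |x| \geq 1, \end{cases}$$

which is $C^\infty$ on the whole real line but not analytic: every derivative vanishes at $x = \pm 1$, so the Taylor series there is identically zero even though the function is nonzero arbitrarily close by. Functions like this can be nonzero on a bounded region and exactly zero outside it, which is precisely the freedom one gives up by insisting on analyticity.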
ADDENDUM: thanks for the link to the Kavli Institute conference, Marcus! The very first slide I examined, Slide 04 from the talk by Beverly Berger, obviously illustrates a piece of the issue I alluded to above, that the interior of astrophysical black holes is currently believed to be somewhat similar to the future interior of the Schwarzschild vacuum but utterly unlike the RN electrovacuum or Kerr vacuum. As I noted, even discussing this issue, while natural within the context of gtr, appears to raise some startling philosophical challenges to the Baconian model of the scientific method!
Slide 06 refers obliquely to models (particularly the mixmaster model) with which I am familiar. In another recent post I wrote out the Bianchi II analog for the classical (Bianchi IX) mixmaster model. These are homogeneous but anisotropic exact dust solutions, expressed in terms of a certain ODE (a different one for each of the different Bianchi types) which feature a "Big Bang type" strong spacelike scalar curvature singularity. The BKL conjecture originally arose in the context of asserting that the approach to a generic curvature singularity in gtr would resemble the behavior of the mixmaster model.
Slide 11 illustrates the time evolution of the Kasner exponents for the vacuum limit of the Bianchi I model (aka Kasner dust). I have investigated this in detail for all the models.
Slide 13 shows the result of computing the Kretschmann scalar; this confirms what I just said, that the singularity is a scalar curvature singularity. And this is cool!
Slide 16 illustrates the vacuum limit of the very Bianchi II model I just mentioned. See my Post # 4 in the thread http://www.physicsforums.com/showthread.php?t=168995 The investigation I just mentioned showed that the Bianchi II model is similar to the Bianchi IX model, but the others can exhibit rather different behavior. As she says, the fascinating thing about the Mixmaster model is the infinite sequence of "Kasner epochs", with transitions being governed (Slide 22) by the expansion of a simple continued fraction! Since continued fractions came up in my diss (on generalizations of Penrose tilings!) I have always found that fascinating!
Slide 30 is related to something I obliquely alluded to and have discussed elsewhere at much greater length--- adding a massless scalar field or massless radiation to a CPW model can drastically change the nature of the curvature singularity. Recall that such models can be locally isometric to models of black hole interiors (at least, roughly speaking "the outer half").
Slide 65 is worth bookmarking as a good illustration of current thinking on the topic of the article by Werner Israel cited above http://online.kitp.ucsb.edu/online/s...ger/oh/65.html The null singularity is thought to be weak and possibly survivable, in which case it would also function as a Cauchy Horizon (CH; no relation).
69 slides, wow--- how long did this talk last?! But a great set of slides, nonetheless. But unfortunately, there seems to be something wrong with the links to many of the slides; many of them seem to be duplicates, as if someone made a goof when uploading the slides by hand. I hope she turns these slides into a proper survey article.
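To make the continued-fraction remark a bit more concrete, here is a small numerical sketch (plain Python, written for this write-up rather than taken from Berger's slides; the function names are just labels chosen here) of the usual BKL parametrization of the Kasner exponents and of the epoch-to-epoch map whose iteration is where the continued fraction enters:

```python
import math

def kasner_exponents(u):
    """Standard BKL parametrization of the Kasner exponents for u >= 1:
    p1 = -u/(1+u+u^2), p2 = (1+u)/(1+u+u^2), p3 = u(1+u)/(1+u+u^2)."""
    d = 1.0 + u + u * u
    return (-u / d, (1.0 + u) / d, u * (1.0 + u) / d)

def bkl_step(u):
    """Map one Kasner epoch to the next: within an era u decreases by 1;
    when u drops below 2 the era ends and u is replaced by 1/(u-1)."""
    return u - 1.0 if u >= 2.0 else 1.0 / (u - 1.0)

u = math.pi   # any irrational starting value > 1 gives an endless sequence of epochs
for epoch in range(8):
    p1, p2, p3 = kasner_exponents(u)
    # the two Kasner constraints: p1 + p2 + p3 = 1 and p1^2 + p2^2 + p3^2 = 1
    assert abs(p1 + p2 + p3 - 1.0) < 1e-12
    assert abs(p1 * p1 + p2 * p2 + p3 * p3 - 1.0) < 1e-12
    print(f"epoch {epoch}:  u = {u:8.5f}   p = ({p1:+.4f}, {p2:+.4f}, {p3:+.4f})")
    u = bkl_step(u)
```

Roughly speaking, each pass through the second branch of bkl_step begins a new era, and the number of epochs spent in each era is read off from the successive partial quotients of the continued-fraction expansion of the starting value.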
Quote by marcus But if you prefer, when that happens I suppose you could use the word in a slightly different way and say that *the singularity is still there, we just understand better what goes on there* Some people call what replaces the former BB singularity in their models by the name "the Planck regime"-----I don't pretend to understand what is meant by that----allegedly in certain cases the model cranks along smoothly thru the former singularity, but usual ideas of space and time momentarily cease to apply. The term Planck regime generally refers to sectional curvatures (which have the same units as energy density in relativistic units) associated with energies approaching the Planck energy, which is generally regarded as the upper bound of the region where, in theory, gtr might be valid. This regime lies far, far beyond the limits of the regime where observation and experiment have determined that gtr is valid within current error bars. It is currently believed that gtr should ultimately turn out to be useful, as a fundamental theory of gravitation, well beyond the curvatures expected near the exterior/interior of stellar mass black holes, but also generally acknowleged that good models in the context of gtr might require appeal some "effective field theory" taking account of quantum effects (see also the semiclassical approximation for the exterior), and also that gtr might break down at smaller energies than the Planck energy. I have offered above some citations which I feel will help interested lurkers to better understand the current mainstream viewpoint concerning both theoretical problems in gtr and the motivations for the "next generation gravitation theory". Since I've gone to considerable effort to try to clarify these issues, I hope that at least some lurkers with a serious interest in modern astrophysics will follow up by studying these citations. I stress that I hope and believe that the articles in the above cited book will make at least some sense to those lacking mathematical or physical background! I also feel that the horrified reaction in some segments of the general population to black holes and to modern cosmology are based largely upon serious misunderstandings of what these notions, and science generally, really concern. Quote by Chris Hillman ...suffice it to say that gtr has been tested very thoroughly and has held up very well indeed. I disagree with that notion. Most of the GR tests are weak field tests. No single test has been made that would indicate a singularity exists in nature. While GR is a wonderful theory the usability has been mostly exaggerated. Apart from a set of "Mickey Mouse" solutions not even a simple two body situation can be modeled without great difficulties. Some people fall in love with a theory, sometimes they have invested a lifetime of work into it, and then feel a need to defend it to the teeth, they would only "allow" changes that extend and not invalidate earlier work. Emotions can run pretty high, even for different views within the same theory. We only have to look at Eddington's quite appalling behavior towards Chandrasekhar in trying to discredit him. As in the case of Newton's theory also Einstein's theory will be surpassed. And that could mean a complete paradigm shift, not just some adjustments. Recognitions: Science Advisor Oh dear, my post grew too long! 
I just wanted to add, after "best understood via thermodynamics": It is true that it is widely recognized that one possible "side benefit" of a successful next generation gravitation theory is that it may well clarify the physical nature of the putative curvature singularities and other geometric singularities of gtr. But this is not guaranteed. In particular, it is widely recognized that the next generation theory may not in any sense "exorcise" the features which lead (in gtr considered as some kind of limiting case of this yet unknown next generation theory) to the appearance of curvature singularities.
However, I would emphasize that the most troubling kind of "singularity" which arises in gtr is probably the weak singularities which function as Cauchy horizons, as in the slide from Beverly Berger's talk which I archived above, because gtr refuses to predict what happens after an object passes through the locus. This really should be regarded, I think, as a serious theoretical defect of gtr. To anthropomorphize, one can hope that the next generation theory will not say in such situations, "and after this, something interesting might well happen, but if so I haven't a clue what that might be!" But unfortunately, at this point it seems that nothing is guaranteed. OK, I hope we are all converging on agreement regarding all the fundamental points now!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9583228826522827, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/190751-dividing-square-different-parallel-lines-minimize-distance-between.html
# Thread:

1. ## Dividing a square with different parallel lines and minimize distance between.

A square of side 1 is divided into "a" strips by "a-1" equally spaced red lines parallel to a side, and into "b" strips by "b-1" equally spaced blue lines parallel to the red lines. Suppose that a does not divide b and that b does not divide a. What is the smallest possible distance between a red line and a blue line?

Edit: Actually, I think case one may be to assume (a,b) =/= 1. In that case, since a does not divide b and b does not divide a, we have (a,b) = c for some c > 1. In this case a red line and a blue line must coincide, and so the minimum distance is 0. What about the case when (a,b) = 1?

2. ## Re: Dividing a square with different parallel lines and minimize distance between.

Originally Posted by libzdolce
A square of side 1 is divided into "a" strips by "a-1" equally spaced red lines parallel to a side, and into "b" strips by "b-1" equally spaced blue lines parallel to the red lines. Suppose that a does not divide b and that b does not divide a. What is the smallest possible distance between a red line and a blue line? Edit: Actually, I think case one may be to assume (a,b) =/= 1. In that case, since a does not divide b and b does not divide a, we have (a,b) = c for some c > 1. In this case a red line and a blue line must coincide, and so the minimum distance is 0. What about the case when (a,b) = 1?

I assume you mean that the strips must be of equal width, so that the red lines are at distances $k/a\ (1\leqslant k\leqslant a-1)$ from one side of the square, and similarly for the blue lines. It then looks as though the minimum distance from a red line to a blue line should be $1/(ab).$ Reason: given that (a,b) = 1, there exist integers p, q such that $|pa-qb| = 1.$ Then $\left|\tfrac pb - \tfrac qa\right| = \tfrac1{ab}.$
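To back up the "it looks as though" with a quick check, here is a short brute-force verification (Python, added here purely as an illustration; the function name is mine) that the minimum distance is $1/(ab)$ whenever $(a,b)=1$, and $0$ when a red and a blue line coincide:

```python
from fractions import Fraction
from math import gcd

def min_red_blue_distance(a, b):
    """Smallest distance between a red line k/a (1 <= k <= a-1)
    and a blue line m/b (1 <= m <= b-1), computed exactly."""
    reds  = [Fraction(k, a) for k in range(1, a)]
    blues = [Fraction(m, b) for m in range(1, b)]
    return min(abs(r - s) for r in reds for s in blues)

# Check the conjectured answers for all small a, b where
# neither a divides b nor b divides a.
for a in range(2, 25):
    for b in range(2, 25):
        if a % b == 0 or b % a == 0:
            continue
        d = min_red_blue_distance(a, b)
        expected = Fraction(0) if gcd(a, b) > 1 else Fraction(1, a * b)
        assert d == expected, (a, b, d)

print("minimum distance is 1/(ab) when gcd(a,b)=1, and 0 otherwise (checked for a,b < 25)")
```

The exact arithmetic with Fraction matters here: every distance has the form $|kb-ma|/(ab)$, so the real question is whether $|kb-ma|$ can be made equal to $1$ (or $0$) with both lines in range, which is exactly the Bezout argument given in the reply.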
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483262896537781, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/157042/a-limit-that-involves-prime-numbers/157048
# A limit that involves prime numbers

Let $p_{n}$ be the nth prime number and define $(a_{n}), n\geq1$, by: $$a_{n}=\frac{1}{p_1}+\frac{1}{p_2}+\cdots+\frac{1}{p_n}$$ By using this result, $$\lim_{n\rightarrow\infty} \frac{p_{1}}{p_{1}-1} \frac{p_{2}}{p_{2}-1}\cdots\frac{p_{n}}{p_{n}-1}=\infty$$ I have to prove that $\lim_{n\to\infty} a_{n} = \infty$.

- Wait. I saw something before and wrote the answer accordingly. – user17762 Jun 11 '12 at 17:18
- Thanks for all your answers. – Chris's wise sister Jun 11 '12 at 19:42

## 3 Answers

Absolute convergence of $\displaystyle \prod_{k=1}^{\infty} \left( 1+a_k\right)$ means that $\displaystyle \sum_{k=1}^{\infty} \lvert a_k \rvert$ converges. Further, if the $a_k$'s are positive, then we have the following inequalities. $$1 + \sum_{k=1}^{\infty} a_k \leq \displaystyle \prod_{k=1}^{\infty} \left( 1+a_k\right) \leq \exp \left( \sum_{k=1}^{\infty} a_k \right)$$ Hence, if the $a_k$'s are positive, then $\displaystyle \prod_{k=1}^{\infty} \left( 1+a_k\right)$ converges iff $\displaystyle \sum_{k=1}^{\infty} a_k$ converges. Since we are given that $\displaystyle \prod_{k=1}^{\infty} \left(\dfrac{p_k}{p_k-1} \right) = \prod_{k=1}^{\infty} \left(1 + \dfrac1{p_k-1} \right)$ diverges, we have that $$\sum_{k=1}^{\infty} \dfrac1{p_k-1}$$ diverges, i.e. if $b_n = \displaystyle \sum_{k=1}^{n} \dfrac1{p_k-1}$, then $\displaystyle \lim_{n \rightarrow \infty} b_n = \infty$. Now note that \begin{align} a_n & = \dfrac1{p_1} + \dfrac1{p_2} + \dfrac1{p_3} + \cdots + \dfrac1{p_n} > \dfrac1{p_2-1} + \dfrac1{p_3-1} + \dfrac1{p_4-1} + \cdots + \dfrac1{p_n-1} + \dfrac1{p_{n+1}-1}\\ & = b_{n+1} - \dfrac1{p_1-1} = b_{n+1} - 1 \end{align} The above is true since $p_k \leq p_{k+1}-1$ for every $k$, with strict inequality for $k \geq 2$ (consecutive odd primes differ by at least $2$), so the comparison is strict for $n \geq 2$. Now, letting $n \to \infty$, we get what you want.

- I think you meant in your first definition absolute convergence, @Marvis. Otherwise one has to drop the absolute value from the series. – DonAntonio Jun 11 '12 at 18:45
- @DonAntonio Thanks for pointing it out. I have now edited it accordingly. – user17762 Jun 11 '12 at 18:51

Hint: Each term of your product can be expressed in the shape $\dfrac{1}{1-\frac{1}{p}}$. Take the logarithm of your product, and use an estimate for $\ln(1+x)$ when $|x|$ is close to $0$. Added: To complete the proof, note that $\ln\left(\frac{1}{1-x}\right)=-\ln(1-x)=x+\frac{x^2}{2}+\frac{x^3}{3}+\frac{x^4}{4}+\cdots$ when $|x|$ is small. It follows that in particular when $x$ is small positive, $\ln\left(\frac{1}{1-x}\right)<2x$. Put $x=\frac{1}{p_k}$. We find by Comparison that $\sum_1^\infty \frac{1}{p_k}$ diverges.

- thanks for your good idea. I was thinking of some direct result that mainly involves that product. – Chris's wise sister Jun 11 '12 at 17:29

This isn't a direct answer to your question, but it's a nice way to convince yourself that $a_n$ diverges, so I wanted to include it. The prime number theorem implies that the $n$th prime is roughly $n\log n$, so your sum can be approximated by: $$a_n \approx\int_e^n \frac{1}{x\log x}dx = \int_e^n \frac{(1/x)}{\log x}dx = \left[\log\log x\right]_e^n = \log\log n$$ which clearly diverges.

- yeah. This is very helpful and clear when one wants to prove the divergence. – Chris's wise sister Jun 11 '12 at 19:43
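None of the proofs above need it, but if you want to see these estimates numerically, here is a small sketch using sympy's prime utilities (the cut-off of $10^4$ primes and the variable names are arbitrary choices made just for this illustration):

```python
from math import log, exp
from sympy import prime, primerange

N = 10_000
primes = list(primerange(2, prime(N) + 1))   # the first N primes

a = 0.0   # a_n = sum of 1/p_k
b = 0.0   # b_n = sum of 1/(p_k - 1)
P = 1.0   # product of p_k/(p_k - 1) = product of (1 + 1/(p_k - 1))
for p in primes:
    a += 1.0 / p
    b += 1.0 / (p - 1)
    P *= p / (p - 1.0)

pn = primes[-1]
print(f"n = {len(primes)},  p_n = {pn}")
print(f"a_n          = {a:.6f}")
print(f"log log p_n  = {log(log(pn)):.6f}   (Mertens: a_n ~ log log p_n + 0.2615...)")
print(f"prod p/(p-1) = {P:.6f}")

# the sandwich used in the first answer: 1 + b_n <= prod(1 + 1/(p_k-1)) <= exp(b_n)
assert 1 + b <= P <= exp(b)
```

With $10^4$ primes the partial sum $a_n$ still comes out just below $3$, a good reminder of how slowly $\log\log$ grows even though the limit is infinite.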
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9137328863143921, "perplexity_flag": "middle"}
http://nrich.maths.org/1129/note
# All the Digits

## All the Digits

This represents the multiplication of a $4$-figure number by $3$. The whole calculation uses each of the digits $0 - 9$ once and once only. The $4$-figure number contains three consecutive numbers, which are not in order. The third digit is the sum of two of the consecutive numbers. The first, third and fifth figures of the five-digit product are three consecutive numbers, again not in order. The second and fourth digits are also consecutive numbers. Can you replace the stars in the calculation with figures? A practical version of this activity is included in one of the Brain Buster Maths Boxes, which contain hands-on challenges developed by members of NRICH and produced by BEAM.

### Why do this problem?

This problem requires learners to think about place value and the way that standard column multiplication works. Although the problem can be done by trial and improvement, it is solved more efficiently if worked through systematically.

### Possible approach

You could start by showing the problem to the whole group and discussing what is required to do it. Do they understand what consecutive numbers are? Are they confident about the meaning of 'sum' and 'product'? After this introduction the group could work in pairs on the problem so that they are able to talk through their ideas with a partner. This sheet is intended for rough working and the solution, and this sheet gives the blank calculation and digit cards to cut out. Give the children time to make a start and then, after a suitable length of time, bring the group back together to talk about how they are getting on so far. This is a good opportunity to share some initial insights. For example, some pairs may have worked out which digits must be in the four-digit number, even if they don't know the order yet. Some may have started in a different way, for example by looking for the digit which could go in the units column of the four-digit number. Draw attention to those pairs that have adopted a system in their working which means they are trying numbers in an ordered way. This means that they are guaranteed not to leave out any possibilities. You could then leave learners to continue with the problem. At the end, the whole class could discuss the steps in their reasoning and how they reached a full solution. Did they use all the information in the question right from the start? Which parts were most helpful and why?

### Key questions

What could the first figure of the product be if the multiplication is by $3$? Which consecutive numbers could be in the four-digit number? Which other digit could appear in the four-digit number?

### Possible extension

Challenge those pupils who finish quickly to prove to you that there is only one solution. How many solutions would there be if the clues about consecutive numbers did not hold?

### Possible support

Suggest working with digit cards and possibly a mini-whiteboard. This sheet gives the blank calculation and digit cards to cut out.
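For teachers who want to check the uniqueness claim for themselves, here is a small brute-force sketch (in Python; it is not part of the NRICH resource). It reads the written calculation as the four-figure number, the multiplier $3$ and the five-figure product, and enforces only the "each digit once" condition; the clues about consecutive digits can then be checked by hand against the short candidate list it prints.

```python
def all_the_digits_candidates():
    """Four-figure numbers n such that n, the multiplier 3 and the product 3*n
    together use each of the digits 0-9 exactly once."""
    hits = []
    for n in range(3334, 10000):            # 3*n must be a five-figure number
        written = f"{n}3{3 * n}"            # the whole written calculation
        if sorted(written) == list("0123456789"):
            hits.append((n, 3 * n))
    return hits

print(all_the_digits_candidates())
```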
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499914646148682, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/45934?sort=oldest
## Proofs of Rohlin's theorem (an oriented 4-manifold with zero signature bounds a 5-manifold)

A celebrated theorem of Rohlin states the following: An oriented closed 4-manifold $M^4$ bounds an oriented 5-manifold if and only if the signature of $M^4$ is zero.

Simple homological arguments based on Lefschetz duality show that the vanishing of the signature is a necessary condition. Showing that it is also sufficient is however harder. I know two proofs of this fact. Each is a variation of Rohlin's proof of the simpler 3-dimensional case, which says: An oriented closed 3-manifold $M^3$ bounds an oriented 4-manifold.

I was wondering if someone knows a more elementary proof, for instance based on Kirby calculus. The two proofs I know start as follows.

1. By Whitney's theorem we can embed any closed oriented n-manifold $M^n$ in $\mathbb R^{2n}$ and we can immerse it in $\mathbb R^{2n-1}$. The immersion self-intersects into circles, and by accurately surgerying $M^n$ we can eliminate these self-intersections. Surgerying changes $M^n$ via a $(n+1)$-dimensional cobordism, hence we can suppose that $M^n$ itself embeds in $\mathbb R^{2n-1}$.

2. As for knots in 3-space, any codimension-2 closed oriented manifold $M^n \subset \mathbb R^{n+2}$ bounds an oriented "Seifert" $(n+1)$-manifold $W^{n+1}$.

When $n=3$ these two facts imply that every closed oriented 3-manifold bounds an oriented 4-manifold. When $n=4$ we only obtain that every closed oriented 4-manifold is cobordant to a codimension-3 embedded $M^4 \subset \mathbb R^7$ and more work has to be done.

• In his original proof, Rohlin shows that up to blowing up $M^4$ in some points (i.e. making connected sums with $\pm\mathbb {CP}^2$) we can suppose that $M^4$ bounds a 5-cycle in $\mathbb R^7$, which can be subsequently smoothed to an oriented 5-manifold (blow-ups are needed in both steps!). This proof is explained in A la recherche de la topologie perdue.

• In Kirby's book The topology of 4-manifolds, he proves that up to cobordism the 4-manifold $M^4$ can be immersed in $\mathbb R^6$. Such an immersion has double and triple points, like a surface in $\mathbb R^3$. Triple points have signs. He proves a nice theorem which says that the number of triple points counted with sign equals (up to a factor) the first Pontryagin number, which in turn equals (up to a factor 3) the signature thanks to the Hirzebruch formula! Therefore if $M$ has signature zero we can pair double points with opposite signs and destroy them by surgery. Finally we obtain an embedded cobordant 4-manifold $M^4 \subset \mathbb R^6$. Now codimension is two and there is a "Seifert" 5-manifold bounding $M^4$.

Finally, here is my question: Do you know any other proof different from these ones? For instance, a proof which does not use embeddings in Euclidean space? References are of course welcome. -

4 Rene Thom, "Quelques propriétés globales des variétés différentiables", Commentarii Mathematici Helvetici 28, page 17–86?? In this well known and fields-medal-winning paper, Thom computes the unoriented cobordism completely, and also a large chunk of oriented cobordism, including the statement that $\Omega^{SO}_{4}=\mathbb{Z}$, which immediately implies the result you are asking for. But Thom's construction relies on embedding into euclidean space as well. – Johannes Ebert Nov 14 2010 at 13:26 Thank you very much, I thought Thom only considered the non-oriented case.
I am curious to see which techniques he used. – Bruno Martelli Nov 15 2010 at 17:39

## 1 Answer

It depends on what you consider elementary. Gompf-Stipsicz has something like this: Morse theory gives a handle decomposition. Do surgery on circles (the trace gives a bordism) to kill the 1-handles (this introduces 2-handles). Turn upside down and kill the 3-handles. The 2-handles are attached along a framed link whose surgery gives an $S^3$ since the 4-manifold is closed. Kirby calculus says your diagram is Kirby move equivalent to the empty framed link. So do the Kirby moves, but every time you blow up a +1 also blow up a -1 (off in a corner) and note that `$CP^2 \# -CP^2$` bounds (it's the boundary of a $D^3$ bundle over $S^2$). Handle slides are diffeos of the 4-manifold. Don't blow down, just move extra $\pm 1$ unknots aside. When you are done, your picture is an unlink with framings $\pm 1$, the same number of each since the signature is zero. This shows your manifold is bordant to a connected sum of `$CP^2\# -CP^2$`. -

1 How can one prove Kirby calculus (namely, that two diagrams of the same 3-manifold are Kirby move equivalent) without using Rohlin's theorem? Gompf-Stipsicz' proof of Kirby calculus (on page 161) makes use of Rohlin's theorem, which is stated without proof on page 341. – Bruno Martelli Nov 13 2010 at 20:50 1 Oh, you're looking for a non-circular proof! I don't know. Incidentally, on that page they attribute this fact to Thom, not Rohlin. – Paul Nov 13 2010 at 22:57 @Bruno: Kirby's first published proof just used Cerf's theorem on generic 1-parameter families of smooth functions connecting two Morse functions on a manifold, didn't it? – Ryan Budney Nov 14 2010 at 6:52 @Ryan: Kirby seems to use Rohlin's theorem at the beginning of page 37 of his paper. As in Gompf-Stipsicz, given two Kirby diagrams of the same 3-manifold he builds a closed 4-manifold by gluing the two resulting 4-dimensional handlebodies, he makes some blow-ups in order to kill the signature and he then uses that the resulting 4-manifold bounds a 5-manifold. Then he applies Cerf's theory to this 5-manifold, as far as I can understand from a quick reading. – Bruno Martelli Nov 14 2010 at 11:54 3 The MCG proofs of Kirby's theorem don't use Rokhlin's theorem. See my answer here: mathoverflow.net/questions/16848/… – Daniel Moskovich Nov 14 2010 at 12:39 show 1 more comment
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232218265533447, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/155430/evaluating-int-1-infty-frac-t-t-1t2-dt/155515
# Evaluating $\int_1^{\infty} \frac{\{t\} (\{t\} - 1)}{t^2} dt$ I am interested in a proof of the following. $$\int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \log \left(\dfrac{2 \pi}{e^2}\right)$$ where $\{t\}$ is the fractional part of $t$. I obtained a circuitous proof for the above integral. I'm curious about other ways to prove the above identity. So I thought I will post here and look at others suggestion and answers. I am particularly interested in different ways to go about proving the above. I'll hold off from posting my proof for sometime to see what all different proofs I get for this. - 5 Well, for $t$ between integers $n$ and $n+1$, the integral becomes $\int_n^{n+1} (t-n)(t-n-1)/t^2\, dt = 2 + (2n+1)(\log n - \log(n+1))$. So the problem is not really one of integration but of summing that series from $n=1$ to $\infty$, right? – Rahul Narain Jun 8 '12 at 3:46 @RahulNarain Yes. Right. – user17762 Jun 8 '12 at 3:50 @RahulNarain Just curious. What made you split the integrals from $n$ to $n+1$ and sum them up? Did you find the pattern by trying to integrate the above with mathematica by taking the upper limit as a large value (or) did the $\{t\}$ influence you to split the integral from $n$ to $n+1$ and sum it up? I am asking this question to understand how people think and go about doing a problem. – user17762 Jun 8 '12 at 4:15 2 It's simple: $\{t\}$ is a piecewise linear function of $t$, and the pieces are $[n,n+1)$. – Rahul Narain Jun 8 '12 at 4:24 2 Any time I see an integral involving a function that's piecewise well-behaved, my first instinct is to break it up into well-behaved pieces. Unfortunately, my series-fu is weak, and I don't know how to sum the series after that. Mathematica gets the right sum but it doesn't have a "show steps" button... – Rahul Narain Jun 8 '12 at 4:32 show 1 more comment ## 2 Answers The integral on $[1,N+1]$ is (see @Rahul's first comment) $$I_N=\sum_{n=1}^N\big(2+2\log n+(2n-1)\log n-(2n+1)\log(n+1)\big),$$ that is, $$I_N=2N+2\log(N!)-(2N+1)\log(N+1).$$ Thanks to Stirling's approximation, $2\log(N!)=(2N+1)\log N-2N+\log(2\pi)+o(1)$. After some simplifications, this leads to $$I_N=\log(2\pi)-(2N+1)\log(1+1/N)+o(1)=\log(2\pi)-2+o(1).$$ - +1. This was what I did. In some sense, I would like to know what are all the different ways to find the constant $\sqrt{2 \pi}$. Couple of methods, I know are based on the saddle point method and another one is a bit more elaborate computation of the asymptotic. – user17762 Jun 9 '12 at 17:59 – Chris's wise sister Jun 16 '12 at 11:52 @Chris Thanks. ${}$ – user17762 Jun 16 '12 at 23:32 Let's consider the following way involving some known results of celebre integrals with fractional parts: $$\int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \int_1^{\infty} \dfrac{\{t\}^2}{t^2} dt - \int_1^{\infty} \dfrac{\{t\}}{t^2} dt = \int_0^1 \left\{\frac{1}{t}\right\}^2 dt- \int_0^1 \left\{\frac{1}{t}\right\} dt = (\ln(2\pi) -\gamma-1)-(1-\gamma)=\ln(2\pi)-2=\log \left(\dfrac{2 \pi}{e^2}\right).$$ REMARK: there is a theorem that establishes a way of calculating the value of the below integral for $m\geq1$: $$\int_0^1 \left\{\frac{1}{x}\right\}^m dx$$ The proof is complete. - Which book is this taken from? – Did Jun 8 '12 at 8:49 – Chris's wise sister Jun 8 '12 at 8:56 I see. Thanks. – Did Jun 8 '12 at 8:59 @did: you're welcome any time. – Chris's wise sister Jun 8 '12 at 9:00
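As a quick numerical sanity check (not part of the original exchange), the per-interval evaluation from Rahul's comment gives the integral over $[1,N+1]$ in closed form, and the partial values can be compared with $\log(2\pi/e^{2}) \approx -0.1621$:

```python
import math

def I(N):
    """Integral of {t}({t}-1)/t^2 over [1, N+1], summed interval by interval."""
    return sum(2.0 + (2 * n + 1) * (math.log(n) - math.log(n + 1)) for n in range(1, N + 1))

target = math.log(2 * math.pi) - 2          # log(2*pi/e^2)
for N in (10, 100, 1_000, 10_000):
    print(f"N = {N:6d}   I_N = {I(N):+.6f}   target = {target:+.6f}")
```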
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431493282318115, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/56666/is-it-possible-that-these-seriess-value-is-0
Is it possible that these series' values are $0$? $$\sum_{n=1}^{\infty}\frac{\left ( -1 \right )^n}{n^x}\cos{\left ( y\ln{n} \right )}$$ $$\sum_{n=1}^{\infty}\frac{\left ( -1 \right )^n}{n^x}\sin{\left ( y\ln{n} \right )}$$ Here $x$ and $y$ are arbitrary real numbers, and $x>0$. Question. Is it possible that the values of these series are $0$? - 3 If $x+iy$ is a nontrivial zero of Riemann $\zeta$, sure... – J. M. Aug 10 '11 at 8:01 @J.M.: Thank you for the comment but I don't understand completely. Could you explain with more words? – 4545454545SI Aug 10 '11 at 8:14 1 – J. M. Aug 10 '11 at 8:16 Wow, I understand that. Thank you very much. – 4545454545SI Aug 10 '11 at 8:25

1 Answer

To settle this question: The two series in the question are respectively the real and imaginary parts of $-\eta(x-iy)$, where $\eta(s)$ is the Dirichlet $\eta$ function. Thus for real $x$ and $y$, if $x+iy$ is a nontrivial zero (recall that the series converge only for $x > 0$) of the Riemann $\zeta$ function, both series will be zero. Additionally, since $\eta(s)=(1-2^{1-s})\zeta(s)$, the points with $x=1$ and $y=\frac{2\pi k}{\ln\,2}$ for $k$ a nonzero integer are also zeroes. For the analytically continued Dirichlet $\eta$ function, the "trivial" zeroes of Riemann $\zeta$ will also be zeroes of Dirichlet $\eta$. - 1 This is a bit incomplete. For $\mathrm{Re}(s)>0$, all zeros of $\eta(s)$ will either be a nontrivial zero of $\zeta(s)$, or will be of the form $1+2\pi i n /\ln 2, n\in\mathbb{Z}-\{0\}$. – anon Aug 11 '11 at 4:01 Oh yes, let me add that in... thanks @anon. – J. M. Aug 11 '11 at 4:02
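A quick numerical check of the accepted answer (this is my own sketch, not part of the original page; it assumes the mpmath library, whose zetazero and altzeta functions are used): both series should vanish when $x+iy$ is a nontrivial zero of $\zeta$.

```python
from mpmath import mp, mpc, zetazero, altzeta

mp.dps = 30
rho = zetazero(1)                # first nontrivial zero of zeta, about 0.5 + 14.1347i
x, y = rho.real, rho.imag

# The two series are the real and imaginary parts of -eta(x - iy),
# where eta is the Dirichlet eta (alternating zeta) function.
val = -altzeta(mpc(x, -y))
print("zero:", rho)
print("series values:", val)     # real and imaginary parts are ~ 0
```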
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9021604061126709, "perplexity_flag": "middle"}
http://nrich.maths.org/5909/note
# The Square Hole

#### Why do this problem?

The bisected equilateral triangle ($30$-$60$-$90$) is an important shape for students to become familiar with. Each of the $4$ rectangles is made from $2$ equilateral triangles and $2$ isosceles triangles; one of each of those two, put together, makes the $30$-$60$ right-angled triangle. This problem forces consideration of the side lengths in surd form, and provides an opportunity to become familiar with manipulating surd forms. The switch from one quantity as the given unit to another emphasises that the choice of unit is a choice for the problem solver, and can be made so as to make the relationships within a problem as clear as possible.

#### Possible approach

Preceding these questions with 'playtime' using cut-out triangles to form patterns may be a very useful preliminary for many students. Include shapes that have holes, and suggest/invite challenges with respect to that 'hole'. And maybe include the challenge to lose the 'hole' but keep the square.

#### Key questions

• What is the area of the yellow equilateral triangle in terms of its side length?
• What is the relationship between the area of the yellow triangle and the area of the purple triangle? [they are equal]

#### Possible extension

Impossible Triangles? is a good next step, along with the extension suggestions in Equal Equilateral Triangles.

#### Possible support

There are a number of activities which can provide valuable auxiliary experiences for students working on this problem. Drawing first the equilateral triangle using only a straight edge and compasses, and then creating the isosceles triangle likewise, will give a strong sense for the symmetry of each triangle and the relationship between them. Some children 'play' a long time with cut-out triangles arranged on a table, motivated by the aesthetic appeal of emerging pattern possibilities; this is excellent grounding for other mathematical ideas to be built on.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.926002562046051, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/55076?sort=votes
## Stratification of smooth maps from R^n to R?

I'm interested in stratifications of smooth maps $\mathbb{R}^n\to\mathbb{R}$ (or more generally of any $n$-manifold $M^n\to\mathbb{R}$). The codimension 0 stratum should be Morse functions, and the codimension 1 stratum should be Morse cancellations, e.g. the $t=0$ value of the following 1-parameter family of maps $$(x_1,\ldots,x_n) \mapsto tx_1 + x_1^3 \pm x_2^2 \pm\cdots\pm x_n^2 .$$ Is there a good reference for the general codimension $k$ case? Another way of phrasing the question: given a $k$-parameter family of smooth maps $F: P^k\times \mathbb{R}^n\to\mathbb{R}$, is there a known list of specific singularities such that we may assume that $F(p, \cdot)$ has only these singularities after a small perturbation? I suppose the way to start is to make $F$ Morse as a map from an $(n+k)$-manifold to $\mathbb{R}$, then look at the ways the coordinate axes of $P\times \mathbb{R}$ line up with gradients and the eigenspaces of the hessian of the Morse singularities of $F$. But I would rather cite the details than work them out for myself. If the general case is messy (instability, cross-ratios, etc.), I would also be interested in an answer for $n=2$. -

## 2 Answers

It looks to me that what you are really interested in is the Thom-Boardman stratification of the function space. For that I would recommend the well-written Stable Mappings and Their Singularities by Guillemin and Golubitsky (in the Springer GTM series). - Guillemin and Golubitsky don't quite set it up in this setting -- they develop the general Mather machine but they don't apply it to real-valued function spaces, at least, not in any real detail. – Ryan Budney Feb 10 2011 at 23:03 Granted. I guess, the other possible reference might be Cerf, at least when the codimension is small. – John Klein Feb 10 2011 at 23:29 1 Yup. Cerf starts from the Thom-Mather machine and works out the details of the singularities in the real-valued function case. Guillemin and Golubitsky is probably the most readable account of Thom-Mather theory out there, as far as I know. – Ryan Budney Feb 11 2011 at 1:25 Thanks John, I'll have a look at G&G. – Kevin Walker Feb 11 2011 at 4:15 1 A good supplement to G&G which provides an alternative understanding of the Thom-Boardman stratification is Porteous' paper springerlink.com/content/ln6331341j213515 – Sergey Melikhov Feb 11 2011 at 15:58

A standard reference is: F. Sergeraert "Un theoreme de fonctions implicites sur certains espaces de Frechet et quelques applications," Ann. Sci. Ecole Norm. Sup. (4) 5 (1972), 599-660. This isn't a stratification of the space of maps $M \to \mathbb R$ but it is a stratification of an infinite co-dimension subspace of the space of all smooth maps $M \to \mathbb R$. It's a relatively popular stratification to use among geometric topologists, in that it produces Cerf theory. Rubinstein, Hong and McCullough use it in their work on the homotopy-type of $\operatorname{Diff}(L_{p,q})$. (which is how I learned of it) http://front.math.ucdavis.edu/0411.5016 Is this roughly what you're looking for? - Thanks -- I'll have a look at those references. The paper by J.W. Bruce in the bibliography of Rubinstein et al also looks interesting.
– Kevin Walker Feb 10 2011 at 22:00 1 Ryan: you misspelled Sergeraert's name. – Thierry Zell Feb 11 2011 at 0:09 Ah, thanks for catching that. I made the same mistake on the Cerf Theory Wikipedia page (twice!). – Ryan Budney Feb 11 2011 at 1:05 1 My knowledge of French is pretty minimal, but I didn't see anything like an explicit list in chapters 8 or 9 of Sergeraert. Is the information necessary to produce an explicit list in there, or is the description of the stratification more abstract than that? – Kevin Walker Feb 11 2011 at 4:14 I think it's fairly explicit at some point though I haven't looked at it in much detail in over a year. I think the best thing to do would be to see how Rubinstein & McCullough cite the work, because they're looking for something in the ballpark of what you want. It's one of those moments in the semester where I rarely get my head out of paperwork so I won't have much more to say for a few weeks. – Ryan Budney Feb 14 2011 at 23:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9098612070083618, "perplexity_flag": "middle"}
http://en.m.wikipedia.org/wiki/Regression_model_validation
# Regression model validation

In statistics, regression model validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are in fact acceptable as descriptions of the data. The validation process can involve analyzing the goodness of fit of the regression, analyzing whether the regression residuals are random, and checking whether the model's predictive performance deteriorates substantially when applied to data that were not used in model estimation.

## R² is not enough

Unfortunately, a high R² (coefficient of determination) does not guarantee that the model fits the data well, because as Anscombe's quartet shows, a high R² can occur in the presence of misspecification of the functional form of a relationship or in the presence of outliers that distort the true relationship. One problem with the R² as a measure of model validity is that it can always be increased by adding more variables into the model, except in the unlikely event that the additional variables are exactly uncorrelated with the dependent variable in the data sample being used. To avoid such spurious increases of the R², one can instead use the adjusted R², which penalizes the use of additional explanatory variables in accordance with the amount that they are likely to spuriously increase the R².

## Analysis of residuals

The residuals from a fitted model are the differences between the responses observed at each combination of values of the explanatory variables and the corresponding prediction of the response computed using the regression function. Mathematically, the definition of the residual for the $i$th observation in the data set is written $e_i = y_i - f(x_i;\hat{\beta}),$ with $y_i$ denoting the $i$th response in the data set and $x_i$ the vector of explanatory variables, each set at the corresponding values found in the $i$th observation in the data set. If the model fit to the data were correct, the residuals would approximate the random errors that make the relationship between the explanatory variables and the response variable a statistical relationship. Therefore, if the residuals appear to behave randomly, it suggests that the model fits the data well. On the other hand, if non-random structure is evident in the residuals, it is a clear sign that the model fits the data poorly. The next section details the types of plots to use to test different aspects of a model and gives the correct interpretations of different results that could be observed for each type of plot.

### Graphical analysis of residuals

See also: Statistical graphics

A basic, though not quantitatively precise, way to check for problems that render a model inadequate is to conduct a visual examination of the residuals (the mispredictions of the data used in quantifying the model) to look for obvious deviations from randomness. If a visual examination suggests, for example, the possible presence of heteroskedasticity (a relationship between the variance of the model errors and the size of an independent variable's observations), then statistical tests can be performed to confirm or reject this hunch; if it is confirmed, different modeling procedures are called for.
Different types of plots of the residuals from a fitted model provide information on the adequacy of different aspects of the model.

1. sufficiency of the functional part of the model: scatter plots of residuals versus predictors
2. non-constant variation across the data: scatter plots of residuals versus predictors; for data collected over time, also plots of residuals against time
3. drift in the errors (data collected over time): run charts of the response and errors versus time
4. independence of errors: lag plot
5. normality of errors: histogram and normal probability plot

Graphical methods have an advantage over numerical methods for model validation because they readily illustrate a broad range of complex aspects of the relationship between the model and the data.

### Quantitative analysis of residuals

Main article: Regression diagnostic

Numerical methods also play an important role in model validation. For example, the lack-of-fit test for assessing the correctness of the functional part of the model can aid in interpreting a borderline residual plot. One common situation when numerical validation methods take precedence over graphical methods is when the number of parameters being estimated is relatively close to the size of the data set. In this situation residual plots are often difficult to interpret due to constraints on the residuals imposed by the estimation of the unknown parameters. One area in which this typically happens is in optimization applications using designed experiments. Logistic regression with binary data is another area in which graphical residual analysis can be difficult. Serial correlation of the residuals can indicate model misspecification, and can be checked for with the Durbin-Watson statistic. The problem of heteroskedasticity can be checked for in any of several ways.

## Out-of-sample evaluation

Cross-validation is the process of assessing how the results of a statistical analysis will generalize to an independent data set. If the model has been estimated over some, but not all, of the available data, then the model using the estimated parameters can be used to predict the held-back data. If, for example, the out-of-sample mean squared error, also known as the mean squared prediction error, is substantially higher than the in-sample mean square error, this is a sign of deficiency in the model.
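To make these checks concrete, here is a short, self-contained sketch (not from the article; it assumes NumPy, and the simulated data and variable names are illustrative). It fits a deliberately misspecified straight line to quadratic data and reports R², adjusted R², the Durbin-Watson statistic of the residuals, and the out-of-sample mean squared prediction error on held-back data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: the true relationship is quadratic, so a straight-line fit
# is misspecified even though its R^2 can look respectable.
x = np.linspace(0, 10, 200)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(scale=2.0, size=x.size)

# Hold back the last 25% of observations for out-of-sample evaluation.
cut = int(0.75 * x.size)
x_in, y_in, x_out, y_out = x[:cut], y[:cut], x[cut:], y[cut:]

# Ordinary least squares fit of y = b0 + b1*x on the estimation sample.
X_in = np.column_stack([np.ones_like(x_in), x_in])
beta, *_ = np.linalg.lstsq(X_in, y_in, rcond=None)
resid = y_in - X_in @ beta

# Goodness of fit: R^2 and adjusted R^2 (p = number of estimated coefficients).
ss_res = np.sum(resid**2)
ss_tot = np.sum((y_in - y_in.mean())**2)
n, p = len(y_in), X_in.shape[1]
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)

# Durbin-Watson statistic for serial correlation of residuals (ideal value ~ 2).
dw = np.sum(np.diff(resid)**2) / ss_res

# Out-of-sample mean squared prediction error vs. in-sample MSE.
pred_out = np.column_stack([np.ones_like(x_out), x_out]) @ beta
mse_in = ss_res / n
mse_out = np.mean((y_out - pred_out)**2)

print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
print(f"Durbin-Watson = {dw:.2f}")
print(f"in-sample MSE = {mse_in:.2f}, out-of-sample MSE = {mse_out:.2f}")
```

In this setup the out-of-sample MSE comes out much larger than the in-sample MSE, which is exactly the warning sign of model deficiency described above.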
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8931242823600769, "perplexity_flag": "head"}
http://mathoverflow.net/questions/57941?sort=newest
Pull-backs which are push-outs

Consider a commutative square in a category $\mathcal{C}$ $$\begin{array}{ccc} A&\rightarrow&B\\ \downarrow&&\downarrow\\ C&\rightarrow&D \end{array}$$ Suppose $\mathcal{C}$ is abelian. If this square is a pull-back and $B\rightarrow D$ or $C\rightarrow D$ is an epimorphism, then this square is also a push-out square. Dually, if this square is a push-out and $A\rightarrow B$ or $A\rightarrow C$ is a monomorphism, then this square is also a pull-back square. Are there more general kinds of categories where such things happen? - Try to look for effective epimorphisms. – Martin Brandenburg Mar 9 2011 at 12:16

1 Answer

Yes! Pretoposes (and in particular toposes) also have this property. It is a remarkable fact that pretoposes (which you can think of as having the first-order exactness properties of toposes or $Set$-like categories) have "most" of the same exactness properties as abelian categories (see below). In fact, this is the beginning of a remarkable set of observations due to Peter Freyd, and expounded by him in a discussion at the categories mailing list, which led to a sharp distinction between pretoposes and abelian categories as concentrated particularly in the behavior of the initial object. (In an abelian category, $A \times 0 \cong A$, whereas in a pretopos $A \times 0 \cong 0$. But this is practically the only essential difference.) In fact, Freyd showed that abelian categories and pretoposes are special cases of what he dubbed "AT categories", which contain the core exactness properties which are common to abelian categories and pretoposes. AT categories cut so close to the essence of each of these two special cases that in fact every AT category splits cleanly as a product of an abelian category and a pretopos! I wrote up my own account of this in the nLab, here. - The question could almost have been written as a deliberate feedline to give this wonderful answer a home on MO! – Peter LeFanu Lumsdaine Mar 9 2011 at 14:04 Heh, thanks Peter! I was delighted to have this question fall into my lap, as it were. The idea of AT category is very cool, but I'm not aware that much has been done with it since those discussions between Freyd and Pratt. – Todd Trimble Mar 9 2011 at 15:09 2 One might argue that the idea of AT category is so neat that it destroys itself as an object to be studied: since any AT category splits into an abelian category and a pretopos, why study AT categories rather than just abelian categories and pretopoi separately? – Mike Shulman Mar 9 2011 at 15:58 I've often wondered about that too, Mike. I don't have an answer to that. – Todd Trimble Mar 9 2011 at 16:16 Hi Todd, thanks for your answer. I've had a look at your links and I've found that pretoposes have the second property I mentioned. Do they also have the first one? – Fernando Muro Mar 10 2011 at 15:02 show 5 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9594521522521973, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/157246/trip-around-the-earth/158047
# Trip around the Earth

You are standing at an airport (that lies somewhere) on the equator (of the earth) and have an unlimited number of identical aircraft (same model, make and fuel capacity etc.) with which to make a complete trip around the equator. Each aircraft has the fuel capacity to fly exactly 1/3 (one-third) of the way around the earth along the equator. Any flying plane can transfer fuel to any other plane in the air instantaneously (without spillage), but after the transfer it should be left with sufficient fuel to come back to the airport (or get refuelled again by someone else so that it eventually gets back to the airport). In any case no plane should crash from running out of fuel. Q(1). What is the minimum number of aircraft necessary to get one plane all the way around the equator assuming that all of the aircraft must return safely to the airport? Assume no other airport is available and an unlimited supply of fuel is available (at the airport). Q(2). What is the minimum number of aircraft necessary for one of them to traverse a straight line from the start point to the end point (assuming the airport is at the start point), where the fuel capacity is 1/3 (one-third) of the straight line (and the other conditions are the same as in Q1)? I can solve it by trial and error but I am looking for a mathematical way to derive the solution. - 1 This is another incarnation of the jeep problem - this is popular today! It was asked in a different form earlier. At least this version has a more interesting solution. – Michael Boratko Jun 12 '12 at 5:24 The jeep problem is equivalent to the n-fleet problem, but there the condition was that we could leave aeroplanes in the middle; just one plane should reach the destination point .... but here we can't leave them in the middle... we have to bring them back to the airport safely. – Ravi Gupta Jun 14 '12 at 8:53

## 2 Answers

There is a symmetric solution with 11 aircraft. If you manage to carry a plane with 5 assisting planes up to one third of the equator and leave it with its tank full, then let it fly through the middle third, and redo the same strategy in reverse with 5 other assisting planes, you get a solution. Solving this kind of problem is hard, because you constantly have to maintain a delicate balance between how much you can support the aircraft while still being able to save the assisting aircraft, which in turn will need support that will have to be saved etc. I kinda brute-forced over strategies running with time intervals of 1/6 to find this one (my first try needed a humble 20 aircraft). Here, I show how, with 6 aircraft (flying at one unit of distance per unit of time and unit of fuel, where each aircraft has a tank that can hold up to 6 units of fuel), you can send one of them to a distance of 12 units while still not crashing the other five. A pair $(p,f)$ denotes the number of planes $p$ with their combined fuel $f$ at that time and place. I don't detail explicitly how the fuel is distributed but you can infer it from the table easily.
$$\begin{array}{ccccccccc} & d=0 & d=1 & d=2 & d=3 & d=4 & d=5 & d=6 & d=7\\ t=0 & (6,36) \\ t=1& & (6,30) \\ t=2 & (1,6) & & (5,24) \\ t=3 & & (2,5) & & (4,19) \\ t=4 & (2,12) & & (1,1) & & (3,14) \\ t=5 & & (3,10) & & (1,1) & & (2,10) \\ t=6 & (2,12) & & (2,5) & & & & (2,8) \\ t=7 & & (3,10) & & (1,3) & & (1,1) & & (1,5) \\ t=8 & (2,12) & & (1,5) & & (2,2)\\ t=9 & & (2,10) & & (3,4)\\ t=10 & (1,6) & & (4,6) \\ t=11 & & (5,7) \\ t=12 & (5,30) \\ \end{array}$$

Starting the solution in reverse at t=6 allows you to catch the aircraft when it reaches d=12 at time t=12, and finally to bring it to d=18 = 3 times the distance an aircraft can fly by itself. Seeing as there is some fuel to spare sometimes and as I have almost-solutions for doing the trip with 4+4+1 planes, I wouldn't be too surprised if there is a solution with only 10 aircraft (and maybe 9). It would be nice to know, for each $n$, the greatest distance you can possibly reach with $n$ aircraft. Theoretically, you could do an A* search algorithm with hyperpolyhedra in a phase space of dimension $2n$ to explore everything and obtain the solution after a horrifyingly long computation. You could also write particular flight routes, deciding which aircraft meets which aircraft where and in what order, then leave unknowns for all the distances and the fuel exchanged at each meeting point, and try to solve the horrifying system of linear inequalities you obtain and get the best distance with that particular flight route. Looking at the near-solutions one can brute force with 5 aircraft, this might be doable to find the best distance for $n=5$. - From the $d=2, t=2$ point, you need to send one airplane home with $2$ units of fuel, so it seems you only have $18$ at $d=3, t=3$. Similarly, from $d=6, t=6$ you need $(2,12)$ so one can continue and one can return. – Ross Millikan Jun 14 '12 at 3:03 @Ross: At $d=2, t=2$ the plane going home only needs $1$ unit as it is about to be met by the spare plane at $d=1,t=3$ – Henry Jun 14 '12 at 8:21 Thanks. I hadn't figured that out. – Ross Millikan Jun 14 '12 at 12:36 @mercio: why use new planes after covering 2/3 of the total distance.... we can reuse the 5 planes used in the beginning.... hence 6 planes in total? – Ravi Gupta Jun 14 '12 at 16:33 @RaviGupta : You have to start the reverse procedure at time t=6, so in this particular solution, you have all 11 planes busy for t in [6;12]. But you are right that we could reuse some of the planes at least for t in [0;6] and [12;18]. Though so far I found that they couldn't help very much, I think it's very likely that such better solutions exist out there. – mercio Jun 14 '12 at 16:59

Considering each aircraft has a storage capacity to travel 1/3 the distance, we need 17 aircraft with full storage and another one with 9.3% capacity at the start point. I do not think it makes any difference whether it is on the equator or on a straight line with the same distance. - You might explain how you are going to deploy these aircraft. – Henry Jun 14 '12 at 0:12 You might also consider whether some of them can provide fuel for the departure part of the circle and later for the arrival part. – Henry Jun 14 '12 at 0:12 @Henry: This is an answer to Q(2) then. I agree, it would be nice to see the details. mercio has provided an answer to Q(1) (that I disagree with) since your comment. – Ross Millikan Jun 14 '12 at 2:58
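As a small consistency check of mercio's schedule (my own sketch, not part of the original answer), the code below only verifies that no group of planes is ever credited with more fuel than its combined tank capacity; transfers, refuelling at the airport at $d=0$, and the lone plane's flight beyond $d=7$ are not modelled.

```python
# (planes, combined fuel) at each time t and distance d, copied from the table;
# units as in the answer: tank capacity 6, speed 1, burn rate 1 per plane per unit time.
schedule = {
    0:  {0: (6, 36)},
    1:  {1: (6, 30)},
    2:  {0: (1, 6), 2: (5, 24)},
    3:  {1: (2, 5), 3: (4, 19)},
    4:  {0: (2, 12), 2: (1, 1), 4: (3, 14)},
    5:  {1: (3, 10), 3: (1, 1), 5: (2, 10)},
    6:  {0: (2, 12), 2: (2, 5), 6: (2, 8)},
    7:  {1: (3, 10), 3: (1, 3), 5: (1, 1), 7: (1, 5)},
    8:  {0: (2, 12), 2: (1, 5), 4: (2, 2)},
    9:  {1: (2, 10), 3: (3, 4)},
    10: {0: (1, 6), 2: (4, 6)},
    11: {1: (5, 7)},
    12: {0: (5, 30)},
}

CAPACITY = 6
for t, groups in schedule.items():
    for d, (planes, fuel) in groups.items():
        # No group may ever carry more fuel than its combined tank capacity.
        assert 0 <= fuel <= CAPACITY * planes, (t, d, planes, fuel)
print("no group ever exceeds its combined tank capacity")
```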
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.95746248960495, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/48343/a-question-about-reduced-p-groups
# A question about reduced p-groups I need help with an exercise from Kaplansky's Infinite Abelian Groups (Section 9, Exercise 27). He states the problem as follows: Let $G$ be a reduced primary group which is not of bounded order. Prove that $G$ has cyclic direct summands of arbitrarily high order. This also is an exercise in Fuchs' Infinite Abelian Groups (Section 27, Exercise 1). - – Jack Schmidt Jun 29 '11 at 0:15 I've merged your accounts (by the way, flagging a post for moderator attention is indeed the right way of going about it). – Zev Chonoles♦ Jun 29 '11 at 7:31 @Zev Chonoles: Thank you. – Anononym Jun 29 '11 at 8:31 ## 1 Answer Let $A$ be such a group, and let $B_i = \{a\in A\ |\ |a|=p\text{ and } h(a)=i\}$. Note that at least one of the $B_i$ is non-empty by Lemma 8 in section 9. Also, if there existed an $N$ such that for all $m>N$, $B_m$ was empty, then for all $a\in A$, we would have $p^{N+1}a=0$, so $A$ would have bounded order. Thus infinitely many of the $B_i$ are non-empty; now simply mimic the proof of Theorem 9, using Lemma 7 and Theorem 7. EDIT - Sorry, I left out a couple details, which I don't mind filling in. First, $h(a)$ means the height of $a$. Second, if such an $N$ existed as above, then every element of order $p$ in $p^{N+1}A$ would have infinite height in $A$. It is easy to see this implies it has infinite height in $p^{N+1}A$. Thus $p^{N+1}A$ is divisible; since $A$ is reduced, it is $0$. - Awesome, thanks for the quick answer! – Anononym Jun 29 '11 at 0:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543200731277466, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/11213/list
## Return to Answer

Hi There, It would help if you gave me some examples of actual positive ternary forms with specific linear dependencies. The main source of dependencies is the Siegel representation formula, which calculates the weighted average of representation numbers in terms of a product of "local densities." In practice, what this means is that by restricting the target number n to some appropriate arithmetic progression and relating representations by two genera one may get an explicit linear dependence among representation counts. Very much in this spirit is the viewpoint of Jones, given for example in a 1999 paper by Ono and Soundararajan called "Integers Represented by Ternary Quadratic Forms" where they point out that the number of essentially distinct representations of an (eligible) number $N$ by $x^2 + y^2 + 10 z^2$ is just $h( -40 N) / 4$ when $N$ and 10 are coprime. However, the main thing that would surprise me is linear dependence for all n among primitive forms. For instance, Schiemann showed that no two positive ternary forms (inequivalent) have the same theta series. I'm not sure this facility allows chats back and forth, if you want to try emailing me in person get my address from http://www.ams.org/cml, and for that matter google me as "Will Jagy" in double quotes. William C. Jagy

JULY: follow-up email sent. I posted something here, JSE felt it might be too revealing. I did not think so, but there is little harm in deleting it and sending you email instead.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9418694972991943, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/155942-finding-area-3d-triangle.html
# Thread: 1. ## Finding area of 3D triangle As you've probably guessed, I've started multi-variable calculus. I'm struggling a bit to wrap my head around it. I'm sure this is an easy question, but I don't know where to begin. Find the area of the triangle with vertices at A(1,1,1), B(2,3,4), C(5,2,6) Any help is appreciated. 2. Originally Posted by maxreality Find the area of the triangle with vertices at A(1,1,1), B(2,3,4), C(5,2,6). Hint: The area is equal to $\tfrac12AB\cdot AC\cdot\sin A$, which is half the length of the cross product vector $\vec{AB}\times\vec{AC}$. 3. Got it! Terribly easy once I worked through it. I think I'm trying to visualize and look too deep when the answer isn't as complicated as I'm making it. Thanks!
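Here is a quick numerical check of the hint with NumPy (not part of the original thread):

```python
import numpy as np

A = np.array([1.0, 1.0, 1.0])
B = np.array([2.0, 3.0, 4.0])
C = np.array([5.0, 2.0, 6.0])

# Area = half the length of the cross product of the edge vectors AB and AC.
cross = np.cross(B - A, C - A)
area = 0.5 * np.linalg.norm(cross)
print(cross)   # [ 7.  7. -7.]
print(area)    # 7*sqrt(3)/2, roughly 6.06
```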
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9655815958976746, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/110304?sort=votes
Generalization of Vogt's Theorem for curves in higher dimension

1. Vogt's theorem for plane curves states that if A and B are the endpoints of a spiral arc (an arc whose curvature is nondecreasing from A to B), then the angle $\beta$ of the tangent to the arc at B with the chord AB is not less than the angle $\alpha$ of the tangent at A with AB, and $\alpha = \beta$ only if the curvature is constant.

2. Does anyone know of a result which extends this theorem to space curves or curves in higher dimensions? I have the following conjecture for space curves: Given a regular curve in space $\gamma : [0, l] \rightarrow \mathbb{R}^3$, parametrized by arc-length $s$, let $\kappa$ and $\tau$ denote the Euclidean curvature and torsion respectively. Let us assume that $\kappa$ is non-decreasing and $\tau$ is non-decreasing. Let $A = \gamma(0)$ and $B = \gamma(l)$ and let $\alpha$ be the angle between the tangent plane at $\gamma(0)$ and the chord $AB$ and let $\beta$ be the angle between the tangent plane at $\gamma(l)$ and the chord $AB$. We claim that $\alpha \leq \beta$ and equality holds only if $\gamma$ is a circular helix. -

1 Answer

This is an attempt at an (as yet incomplete) proof of the above claim. I would be happy to receive any corrections and/or comments. Let us denote the curve $\gamma$ by the following parametrization $\gamma(s) = (x_1(s), x_2(s), x_3(s))$. Without loss of generality let us assume that $A = \gamma(0) = (0, 0, 0)$ and $B = \gamma(l) = (x_1(l), 0, 0)$. Let $\theta(s)$ denote the angle between the tangent plane at $\gamma(s)$ and the chord $AB$; thus we have that: $\sin \theta(s) = \langle B(s), (1,0,0) \rangle$. Using the Frenet-Serret formulae, where $T'(s) = \kappa(s) N(s)$ and $N'(s) = -\kappa(s) T(s) -\tau(s) B(s)$, we have that: $\sin \theta(s) = \langle B(s), (1,0,0) \rangle = \frac{1}{\kappa(s)} \langle \gamma'(s) \times \gamma''(s), (1,0,0) \rangle = \frac{1}{\kappa(s)} (x_2'(s) x_3''(s) - x_3'(s) x_2''(s))$. Claim: $\alpha \leq \beta$, i.e., it is enough to prove that $\int_{\theta(0)}^{\theta(l)} \sin \theta \, d\theta \geq 0$. From the equation for $\sin \theta(s)$ we obtain that $d\theta(s) = \frac{\kappa(s) f'(s) - f(s) \kappa'(s)}{\kappa(s)\sqrt{\kappa(s)^2-f(s)^2}}\, ds$, where $f(s) := x_2'(s) x_3''(s)-x_2''(s) x_3'(s) = \kappa(s) \langle B(s), e_1 \rangle$ and $e_1:= (1,0,0)$. On further simplification using the Frenet-Serret formulae we get: $\int_{\theta(0)}^{\theta(l)} \sin \theta(s) d\theta(s) = \int_0^l \frac{\tau(s)}{\kappa(s)} \frac{\langle B(s), e_1 \rangle \langle N(s), e_1 \rangle}{\sqrt{1-\langle B(s), e_1\rangle ^2}} ds$. From here it is not clear to me that the product in the integrand is always positive, or that after integration by parts the integrand is always positive. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222919940948486, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/42358-how-differientiate-these-polynomial-functions.html
# Thread: 1. ## How to differientiate these Polynomial functions Hi I've been stuck with this question for quite a while. School's been over for a week and I have been trying to solve this problem over and over and over again and I failed to get the answers to these following questions. If anybody who can help me please do help I'm sure somebody out there will benefit from my response as well. 1. Let P(a,b) be a point on the curve √x + √y = 1. Show that the slope of the tangent at P is -√b ------ √a 2. For the power functions f(x) = x^n, find the x-intercept of the tangent to its graph at point (1,1). What happens to the x-intercept as n increases without bound (n --> +OO (positive infinity) ) ? Explain the result geometrically 3. For each function, sketch the graph of y=f(x) and find an expresssion for f ' (x). Indicate any points at which the f ' (x) does not exist. a) f(x) = [ x^2, x < 3 [ x + 6, x >-(x is greater or equal then 3) 3 ] b) f(x) = I3x^2 - 6I I means absolute value bars c) f(x) = I IxI - 1 I I means absolute value bars That is all. I'd greatly appreciate it. Thanks. 2. Originally Posted by ghostanime2001 Hi I've been stuck with this question for quite a while. School's been over for a week and I have been trying to solve this problem over and over and over again and I failed to get the answers to these following questions. If anybody who can help me please do help I'm sure somebody out there will benefit from my response as well. 1. Let P(a,b) be a point on the curve √x + √y = 1. Show that the slope of the tangent at P is -√b ------ √a Mr F says: Are you familiar with implicit differentiation? 2. For the power functions f(x) = x^n, find the x-intercept of the tangent to its graph at point (1,1). What happens to the x-intercept as n increases without bound (n --> +OO (positive infinity) ) ? Explain the result geometrically Mr F says: Where are you stuck? ${\color{red} f'(x) = n x^{n-1} \Rightarrow m = n \, \text{at} \, x = 1}$. Then the equation of the tangent is ${\color{red} y - 1 = n(x - 1) \Rightarrow y = ......}$. 3. For each function, sketch the graph of y=f(x) and find an expresssion for f ' (x). Indicate any points at which the f ' (x) does not exist. a) f(x) = [ x^2, x < 3 [ x + 6, x >-(x is greater or equal then 3) 3 ] b) f(x) = I3x^2 - 6I I means absolute value bars c) f(x) = I IxI - 1 I I means absolute value bars That is all. I'd greatly appreciate it. Thanks. 3. Have you drawn the graphs? f'(x) does not exist at 'jumps' and at 'pointy bits' (for the highbrow, jump discontinuities and salient points). 3. yes i am familiar with implicit differentiation but i wanna know if there is another different method regardless if its easier or harder i want know and btw this is my intention was of solving this problem x^1/2 + y^1/2 = 1 y^1/2 = 1 - x^1/2 y = (1 - x^1/2)^2 y = (1 - x^1/2)(1 - x^1/2) y = 1 + 2x^1/2 + x y' = 2(1/2)x^-1/2 + 1 y' = x^-1/2 + 1 But then here is when i get stuck i got the slope y' = x^-1/2 + 1 but how do i show it being equal to -√b ----- √a Thanks again 4. 
Originally Posted by ghostanime2001 yes i am familiar with implicit differentiation but i wanna know if there is another different method regardless if its easier or harder i want know and btw this is my intention was of solving this problem x^1/2 + y^1/2 = 1 y^1/2 = 1 - x^1/2 y = (1 - x^1/2)^2 y = (1 - x^1/2)(1 - x^1/2) y = 1 + 2x^1/2 + x y' = 2(1/2)x^-1/2 + 1 y' = x^-1/2 + 1 But then here is when i get stuck i got the slope y' = x^-1/2 + 1 but how do i show it being equal to -√b ----- √a Thanks again Note that $\sqrt{b} = 1 - \sqrt{a}$. Therefore $- \frac{\sqrt{b}}{\sqrt{a}} = ~ ...... ~ = 1 - \frac{1}{\sqrt{a}}$ ........ Capisce? 5. uhh.... im still not understanding this because we got this homework before we got into implicit differentation so thats why im asking how show it being equal to that without using implicit differention and why when i put the equation into implicit form then try to differentiate it I cannot show it being equal to that - sqrt(b)/sqrt(a) 6. Originally Posted by ghostanime2001 uhh.... im still not understanding this because we got this homework before we got into implicit differentation so thats why im asking how show it being equal to that without using implicit differention and why when i put the equation into implicit form then try to differentiate it I cannot show it being equal to that - sqrt(b)/sqrt(a) I have already shown you how! Working backwards from the result given in the question: Note that . Therefore From your own work: x^1/2 + y^1/2 = 1 y^1/2 = 1 - x^1/2 y = (1 - x^1/2)^2 y = (1 - x^1/2)(1 - x^1/2) y = 1 - 2x^1/2 + x Note the mistake you made in the expansion. y' = - 2(1/2)x^-1/2 + 1 y' = - x^-1/2 + 1 Therefore at x = a: y' = - a^-1/2 + 1 Clearly the required result is now shown. 7. I still dont see where you are going with this Why is y' = - a^-1/2 + 1 = -sqrt(b)/sqrt(a) Just HOW ? are there little methods to show the left side is equal to the right side ??? if i want to show the two are equal. Just bluntly stating the two are equal doesnt prove anything. I want to actually see they are equal not just putting the equal sign in between them and concluding they are equal. No. 8. Originally Posted by ghostanime2001 I still dont see where you are going with this Why is y' = - a^-1/2 + 1 = -sqrt(b)/sqrt(a) Just HOW ? are there little methods to show the left side is equal to the right side ??? if i want to show the two are equal. Just bluntly stating the two are equal doesnt prove anything. I want to actually see they are equal not just putting the equal sign in between them and concluding they are equal. No. You are told that to show that $y' = -\frac{\sqrt{b}}{\sqrt{a}}$. This expression is the same as $- \frac{(1 - \sqrt{a})}{\sqrt{a}} = - \frac{1}{\sqrt{a}} + 1$. This is because you are told that $\sqrt{x} + \sqrt{y} = 1$ and you are given the point (a, b). Therefore $\sqrt{a} + \sqrt{b} = 1 \Rightarrow \sqrt{b} = 1 - \sqrt{a}$. You have done a calculation and found (after my corrections) that $y' = - \frac{1}{\sqrt{x}} + 1$. When you substitute x = a into this you get $- \frac{1}{\sqrt{a}} + 1$. I have not just bluntly stated that the two are equal. They are equal via a clear and logical process. If you still don't get it, then I'm sorry but I'm done here ....... maybe someone else has the time and inclination. 9. Originally Posted by ghostanime2001 Hi I've been stuck with this question for quite a while. 
School's been over for a week and I have been trying to solve this problem over and over and over again and I failed to get the answers to these following questions. If anybody who can help me please do help I'm sure somebody out there will benefit from my response as well. 1. Let P(a,b) be a point on the curve √x + √y = 1. Show that the slope of the tangent at P is -√b ------ √a 2. For the power functions f(x) = x^n, find the x-intercept of the tangent to its graph at point (1,1). What happens to the x-intercept as n increases without bound (n --> +OO (positive infinity) ) ? Explain the result geometrically 3. For each function, sketch the graph of y=f(x) and find an expresssion for f ' (x). Indicate any points at which the f ' (x) does not exist. a) f(x) = [ x^2, x < 3 [ x + 6, x >-(x is greater or equal then 3) 3 ] b) f(x) = I3x^2 - 6I I means absolute value bars c) f(x) = I IxI - 1 I I means absolute value bars That is all. I'd greatly appreciate it. Thanks. For the first one we have two choices $\sqrt{x}+\sqrt{y}=1\Rightarrow{y=(1-\sqrt{x})^2}$ Now just differentiate, or you can do this $\frac{1}{2\sqrt{x}}+\frac{y'}{2\sqrt{y}}=0$ Now plug in x and y and solve for y' 10. If i differentiate using your first way Mathstud28 y = (1- √x)² = (1- √x)(1- √x) = 1 - 2√x + x f'(x) = -2(1/2)x^-½ + 1 = -x^-½ + 1 = -1/√x + 1 substituting x->a and y'->? i get this ? = -1/√a + 1 How do i from here show the slope of tha tangent at that √x + √y = 1 at P(a,b) is -√b ----- √a The thing to remember what im sayin here is that this worksheet i got in school, was given before the class learned implicit differentiation. So That's my problem here how do i prove it equals that without having to even think about implicit differentiation, if i was a currently enrolled calc student in class that didnt know what implicit differentiation meant. Thanks again 11. Originally Posted by ghostanime2001 If i differentiate using your first way Mathstud28 y = (1- √x)² = (1- √x)(1- √x) = 1 - 2√x + x f'(x) = -2(1/2)x^-½ + 1 = -x^-½ + 1 = -1/√x + 1 substituting x->a and y'->? i get this ? = -1/√a + 1 How do i from here show the slope of tha tangent at that √x + √y = 1 at P(a,b) is -√b ----- √a The thing to remember what im sayin here is that this worksheet i got in school, was given before the class learned implicit differentiation. So That's my problem here how do i prove it equals that without having to even think about implicit differentiation, if i was a currently enrolled calc student in class that didnt know what implicit differentiation meant. Thanks again Read post #8 again. 12. So im just trying to prove Left side equals the right side then to show the slope ??? 13. Originally Posted by ghostanime2001 So im just trying to prove Left side equals the right side then to show the slope ??? Yes. 14. Originally Posted by ghostanime2001 So im just trying to prove Left side equals the right side then to show the slope ??? i am sorry. i do not understand your question, nor do i see where you're getting stuck. perhaps you can be more clear on what is confusing you. we are still talking about question 1, right? we have shown that the slope is $1 - \frac 1{\sqrt{a}}$, by solving for y and differentiating regularly. i think you followed that ok, right? now, the problem asked that we must prove that the slope is $- \frac {\sqrt{b}}{\sqrt{a}}$. thus, it remains to show that $- \frac {\sqrt{b}}{\sqrt{a}} = 1 - \frac 1{\sqrt{a}}$. this is what Mr F did in post #8. the complete solution is there. 
we know the point (a, b) is on the curve (that is, when x = a, y = b). so, since the curve is $\sqrt{x} + \sqrt{y} = 1$ we can plug in x = a and y = b, and it will be on the curve. thus we get $\sqrt{a} + \sqrt{b} = 1$. solving for $\sqrt{b}$, we get, $\sqrt{b} = 1 - \sqrt{a}$ ok, now back to what we know the slope is. we know it is $1 - \frac 1{\sqrt{a}}$. now for some algebraic manipulation $\underbrace{1 - \frac 1{\sqrt{a}}}_{\text{our slope}} = \frac {\sqrt{a} - 1}{\sqrt{a}}$ ..............i just added the fractions ............ $= \frac {-( {\color{red} 1 - \sqrt{a}})}{\sqrt{a}}$ ........... what's in red is exactly what $\sqrt{b}$ is! how lucky! ............ $= - \frac {{\color{red}\sqrt{b}}}{\sqrt{a}}$ as was to be shown. 15. HOly **** !?!?!?!?!?!??!?! You almost finished my quest though T_T But just one more question is it possible to take out the negative sign out before getting to that step so the approach to the answer is more clear and logical or is that it ?? Btw... how do u guys get that text i want to like make my own online source of solving homework questions and post it on the web so its a whole lesson from what i've learned posted on the net for the rest of the world. I'm just wondering cuz that text you guys wrote, all those mathematical symbol is so clear and nice i wanna know if you can do that in Word or something or possibly download a program that types that text. Thanks
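A quick symbolic check of the result discussed in this thread (not part of the thread itself) can be done with SymPy, following the same explicit-solution route the original poster used, so no implicit differentiation is needed. The second question is also verified in passing.

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# Question 1: solve sqrt(x) + sqrt(y) = 1 for y, differentiate, evaluate at x = a.
y = (1 - sp.sqrt(x))**2
slope = sp.diff(y, x).subs(x, a)           # equals 1 - 1/sqrt(a)

# Claimed form -sqrt(b)/sqrt(a), using sqrt(b) = 1 - sqrt(a) since (a, b) lies on the curve.
claimed = -(1 - sp.sqrt(a)) / sp.sqrt(a)
print(sp.simplify(slope - claimed))        # 0, so the two expressions agree

# Question 2: the tangent to x**n at (1, 1) is y - 1 = n(x - 1); its x-intercept is 1 - 1/n.
n = sp.symbols('n', positive=True)
print(sp.limit(1 - 1/n, n, sp.oo))         # 1: the intercepts approach x = 1 as n grows
```

Geometrically, the second limit says the tangent lines at (1, 1) become steeper and steeper, so they cut the x-axis closer and closer to x = 1.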
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454973936080933, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/79488?sort=newest
Is there a “knot theory” for graphs? Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I think knot theory has been studied for quite a while (like a century or so), so I'm just wondering whether there is a "knot theory" for graphs, i.e. the study of (topological properties of) embeddings of graphs into R^3 or S^3. If yes, can anyone show me any reference? If the answer is basically no, then why? Is it just too hard, uninteresting, or can it be essentially reduced to the study of knots (and links)? - check out: ams.org/mathscinet-getitem?mr=1781912 – Agol Oct 30 2011 at 2:26 You might be interested in the last part of my answer to this question : mathoverflow.net/questions/39650/… – Andy Putman Oct 30 2011 at 2:40 7 Answers Just as Ryan Budney pointed out, instead of ambient isotopy one may consider a weaker equivalence relation on spatial graphs, namely the one generated by isotopy and IH-moves (also known as Whitehead moves). With this definition of equivalence, two knotted graphs are equivalent if and only if they admit isotopic regular neighbourhoods. This equivalence relation has been already considered, for example, by Kinoshita in 1958, and it was named ''neighbourhood equivalence'' for obvious reasons. Of course, the study of graphs up to neighbourhood equivalence reduces to the study of knotted handlebodies. There exist several invariants of knotted handlebodies. Among them, I have recently become interested in the quandle coloring invariants defined by Ishii in his paper Moves and invariants for knotted handlebodies Algebraic & Geometric Topology 8 (2008) 1403–1418 In a joint paper with R. Benedetti "Levels of knotting of spatial handlebodies" http://arxiv.org/abs/1101.2151 we have exploited (among other things, like the Alexander invariants of the complement) these quandle coloring invariants in order to distinguish different levels of knotting for handlebodies. Just as in the case of knot theory, a good invariant for a knotted handlebody is its complement. However, while Gordon-Luecke's Theorem ensures that a knot is determined by its complement, there exist inequivalent handlebodies whose complements are homeomorphic (this is one of the reasons why I would compare the theory of knotted handlebodies of genus g with the theory of links with g components, rather than with knot theory). On the other hand, Kent and Souto exhibited here http://arxiv.org/abs/0904.2332 a spatial handlebody whose complement admits a unique embedding in the 3-sphere up to isotopy. Also observe that, due to Fox's reimbedding Theorem, every compact submanifold of the 3-sphere admits a reimbedding as the complement of a finite union of handlebodies in the 3-sphere itself. Therefore, a complete understanding of knotted handlebodies should provide an understanding of spatial domains in general. - You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The theory of (un)knotted graphs also contributes to knot theory. For example, the theory of tunnel number one knots can be thought of as the theory of embedded theta graphs with a distinguished edge (the tunnel). The operation of band summing two knots (or more generally any rational tangle replacement) can be studied by examining an eyeglasses graph with the separating edge the core of the band. 
There is a relatively nice interplay between such graphs and other 3-manifold theories like thin position and sutured manifold theory. - whether there is a "knot theory" for graphs... or can it be essentially reduced to the study of knots (and links)? As Dror Bar-Natan points out in his interesting answer, it can, "if you totally understand the theory of tangles". If you don't, but you're very generous as to what amounts to a reduction, then it "almost can" (up to about one integer invariant) by a theorem of Roberston, Seymour and Thomas: two knotless, linkless embeddings $f,g$ of a graph $G$ in $\Bbb R^3$ are equivalent (by an isotopy of $\Bbb R^3$) if and only if the restictions of $f$ and $g$ to every subgraph of $G$, homeomorphic to $K_5$ or $K_{3,3}$ are equivalent. Here "knotless" means that every cycle (a subgraph homeomorphic to $S^1$) in $G$ is unknotted, and "linkless" means that every two disjoint subgraphs are separated by an embedded $2$-sphere. To be precise, Robertson, Seymour and Thomas had a slightly different formulation (with "panelled" in place of "knotless and linkless") and the above version is proved in http://arxiv.org/abs/math/0612082. What is the "about one integer invariant"? As Ryan Budney points out in his interesting answer, it helps to study graphs up to weaker equivalence relation than ambient equivalence or non-ambient isotopy (which, incidentally, already kills all local knots). Taniyama (Topol. Appl. 65 (1995), 205-228) has shown that two embeddings of a graph $G$ in $\Bbb R^3$ are "homologous" (=cobound an embedded $G\times I$+(handles) in $\Bbb R^3\times I$, where each handle is a torus attached by a tube to a $2$-cell, (edge)$\times I$) if and only if they have the same Wu invariant (this integer invariant is really just the $1$-parameter version of the van Kampen obstruction). On the other hand, Shinjo and Taniyama (Topol. Appl. 134 (2003), 53-67) have shown that the vanishing of the Wu invariant of a graph is determined by the vanishing of its restriction to subgraphs homeomorphic to $K_5$, $K_{3,3}$ and $S^1\sqcup S^1$. Another interesting relation on embedded graphs in link homotopy, i.e. arbitrary self-intersections of connected components are allowed, but distinct components may not intersect. The link homotopy classification of embeddings in $\Bbb R^3$ of a disjoint union of two $S^1$'s and a wedge of $S^1$ is already pretty nontrivial. - The theory of knotted trees is obviously trivial. So given a knotted graph $\Gamma$, take a maximal tree in it and you can bring it to a standard form, say to be embedded as a planar object inside a tiny disk that is disjoint from the rest of the knotted graph; which is just the finitely many arcs that make the complement of the tree. But now you can draw $\Gamma$ in the plane so that "everything interesting" (namely, the complement of the tree) is outside of a small disk. Do inversion, and you have a fixed tree outside the disk and a tangle inside it. (Some details depend on whether your vertices are rigid or not, or "thickened" or not, but the conclusion is always more or less the same). This correspondence between knotted graphs and tangles is not canonical - it depends on the (combinatorial) choice of a maximal tree, and modifying that choice modifies the resulting tangle (in simple ways that will not be stated here). So topologically speaking, "knotted graphs" are not interesting. They are merely tangles, along with a bit of further combinatorial data (mostly the tree). 
If you totally understand the theory of tangles (modulo some simple to state actions, which also depend on what rigidity assumptions are made for the vertices), you'd totally understand knotted graphs. Yet there's lot's of beautiful information in the interaction between the combinatorics of the graph and the topology of the tangle. For example, see my recent paper with Zsuzsanna Dancso, arXiv:1103.1896, in which we study the relationship between knotted trivalent graphs and Drinfel'd associators. - In principle, there is an algorithm to tell if two graphs in $\mathbb{R}^3$ are isotopic, using Waldhausen's method of recognizing Haken 3-manifolds. The complement of a graph (obtained by removing an open regular neighborhood) has a natural pared manifold structure (also keeping track of meridians and longitudes on closed loop components). The pared manifold just means you have a collection of annuli in the boundary, and these annuli come from the regular neighborhoods of the edges of the graph. Waldhausen's theorem may be extended to determine the homeomorphism problem for pared manifolds - although it is not explicitly stated in this form, his method makes use of a more general concept of manifolds with boundary pattern, of which pared manifolds are a special case. It's not hard to see that two graphs are isotopic if and only if their corresponding pared manifolds are equivalent. However, this algorithm has not been fully implemented by computer. One practical method is to use the program Orb. This allows you to input a graph using a mouse, similar to Snappea/SnapPy. If the graph complement is hyperbolic (in an appropriate sense, where the pared locus corresponds to rank one cusps, and the complementary regions corresponding to vertices of the graph are totally geodesic), then Orb will allow you to tell if two graph complements are isotopic (if it doesn't crash!). There is a relative JSJ decomposition, which allows one to break up a pared manifold into hyperbolic and Seifert pieces (such as the graph generalization of connect sum), but this has not been implemented as far as I know. - Yes, there are many such results. Conway-Gordon, Sachs in the 80s proved that any map $K_6 \to R^3$ contains two disjoint linked traingles. Robertson-Seymour-Thomas proved found the family of minors that characterizes such property. Lovasz-Schrijver proved that this is equivalent to having Colin de Verderie invariant larger than 4 and the projection on the null space of the Colin de Verderie matrix is a linkless embedding (in the case the null space is of dimension four or less, I forget if this is a theorem or a conjecture?) There are many papers saying things like, for your favorite Link invariant there is a numnber $n$, such that for any embedding $K_n \to R^3$ one can find a link with nontrivial your favorite invariant. I don't remember the references now, maybe google "ramsey theory for links" or something like that. ($K_n$ is the complete graph on $n$ vertices). From a more geometrical point of view, here are two things you can do: One is to look at metric properties. For this look up Kolmogorov-Borodin and the recent paper by Guth and Gromov. Actually expanders were discovered for this reason. The alternative is to think about the linear structure, namely you can ask whether there are affine subspaces of the ambient space intersecting many of the edges for any embedding. In a recent joint paper with Boris Bukh we called this "space crossings". 
Because if the affine flat that intersects your edges is of dimension 0 this is precisely a crossing. We investigated the "space crossing numbers" of graphs in $R^3$, but our techniques generalize to graphs in $R^d$. The first result in this direction was Zivaljevic's who proved that $K_{6,6} \to R^3$ has non zero space crossing number. Our main result is an analogue of the classical crossing number inequality which almost implies it. - 1 I'm leaving this as a comment rather than an answer because it's really the same as what Alfredo already said, but for more of what he mentions in his first paragraph, at a nontechnical level, see en.wikipedia.org/wiki/Linkless_embedding – David Eppstein Oct 30 2011 at 2:55 "in all the previous results is very important that you are dealing with codimension two". The Conway-Gordon/Sachs result has NOTHING to do with codimension two: any map of the $n$-skeleton of the $(2n+3)$-simplex in $\Bbb R^{2n+1}$ contains a pair of disjoint linked boundaries of the $(n+1)$-simplex (Lovasz-Schrijver, ams.org/journals/proc/1998-126-05/… and Taniyama, pjm.berkeley.edu/pjm/2000/194-2/p14.xhtml; a third proof is in Example 4.7 in arxiv.org/abs/math/0612082 and a fourth in Example 4.9 in arxiv.org/abs/1103.5457v2). – Sergey Melikhov Oct 30 2011 at 18:30 "... as a famous result of Zeeman says". The fact that every graph unknots in $\Bbb R^n$ for $n>3$ is trivial (use general position) and has nothing to do with Zeeman. Zeeman's result is about piecewise-linear unknotting of spheres in codimension $\ge 3$. But spheres easily link, and connected manifolds easily knot in high codimensions. In fact, "your favorite" link invariant used in "Ramsey link" theory (and I've seen papers dealing with the Sato-Levine invariant and Milnor's triple invariant) probably has a higher-dimensional extension (certainly in those two cases). – Sergey Melikhov Oct 30 2011 at 19:09 (con't) In more detail, higher-dim extensions of Milnor's triple invariant detect a Brunnian "Borromean rings" link of three $S^{2k−1}$'s in $\Bbb R^{3k}$, and a higher-dim counterpart of the Sato-Levine invariant (not the original higher-dim Sato-Levine invariant) detects a "Whitehead link" of two $S^{2k−1}$'s in $\Bbb R^{3k}$, $k\ne 3,7$, which has zero linking number. – Sergey Melikhov Oct 30 2011 at 19:20 Finally, the Robertson-Seymour-Thomas result about minors is likely to have an analogue for linkless embeddings of $n$-dimensional simplicial complexes in $\Bbb R^{2n+1}$, $n\ne 2$ (see arxiv.org/abs/1103.5457v2), but I'd skeptical about lower codimension, especially codimension two ($K^n$ in $\Bbb R^{n+2}$ for $n>1$) In fact, I haven't seen any results whatsoever on "codimension two Ramsey theory" ($K^n$ in $\Bbb R^{n+2}$) except for the classical case ($n=1$). – Sergey Melikhov Oct 30 2011 at 19:27 show 2 more comments Yes, there's plenty of work on this. First of all, you have to define the notion of equivalence that you are interested in. Usually people only care about the graph up to handle-slide (turning the subject into the subject of knotted handlebodies), so you can assume the graph is tri-valent. But you could go further to study graphs up to isotopy and there's work on that too. Much of the technology to study knots translates to studying knotted graphs. 
Some references: http://katlas.org/drorbn/index.php?title=The_Alexander_Polynomial_of_a_Knotted_Trivalent_Graph http://katlas.org/drorbn/index.php?title=The_Kontsevich_Integral_for_Knotted_Trivalent_Graphs http://ldtopology.wordpress.com/2009/10/29/which-knotted-objects-are-worthy-of-study/ http://www.ms.unimelb.edu.au/~snap/ http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.em/1276784791 The last two references are rather nice as they show that much the same way hyperbolic geometry "dominates" traditional knot theory, it plays a similar role in the study of knotted trivalent graphs. In this case orbifolds play a more prominent role. - The first katlas link looks puzzling. Is there any paper about this Reidemeister torsion of graph complement? I remember finding Viro's Alexander polynomial/Conway function of trivalent graphs (arxiv.org/abs/math/0204290, mi.mathnet.ru/eng/aa74) to be quite enlightening (e.g. in trying to understand the ordinary multivariable Alexander polynomial). But is it related to the Reidemeister torsion? For links, the relation is made very clear in arxiv.org/abs/math/9806035, but this doesn't work for graphs, does it? – Sergey Melikhov Oct 31 2011 at 20:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227115511894226, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/47137-power-mean.html
# Thread:

1. ## Power mean

From the power mean inequality $P_r = \left(\frac{a_{1}^{r} + \ldots + a_{n}^{r} }{n}\right)^{1/r}$ why does $\lim_{r \to 0} P_{r} = (a_{1}a_{2} \cdots a_{n})^{1/n}$?

2. Originally Posted by particlejohn From the power mean inequality $P_r = \left(\frac{a_{1}^{r} + \ldots + a_{n}^{r} }{n}\right)^{1/r}$ why does $\lim_{r \to 0} P_{r} = (a_{1}a_{2} \cdots a_{n})^{1/n}$? There's a proof in this link: Generalized mean - Wikipedia, the free encyclopedia

3. First: $\lim_{r \to 0} \log P_r = \lim_{r \to 0} \tfrac{1}{r} \cdot \log\left(\tfrac{\sum_{k=1}^n a_k^r}{n}\right)$

Now we have: $\log\left(\tfrac{\sum_{k=1}^n a_k^r}{n}\right) \underset{r \to 0}{\sim} \tfrac{\sum_{k=1}^n a_k^r}{n} - 1 = \tfrac{\sum_{k=1}^n (a_k^r - 1)}{n}$ (since what's inside the logarithm tends to 1)

So: $\lim_{r \to 0} \log P_r = \lim_{r \to 0} \tfrac{1}{r} \cdot \tfrac{\sum_{k=1}^n (a_k^r - 1)}{n} = \tfrac{1}{n} \sum_{k=1}^n \lim_{r \to 0} \tfrac{1}{r}\left(a_k^r - 1\right)$ (by the linearity of the limit; all of those limits exist)

$\lim_{r \to 0} \tfrac{1}{r}\left(a_k^r - 1\right) = \lim_{r \to 0} \tfrac{1}{r}\left(e^{r \log a_k} - 1\right) = \log a_k$

So: $\lim_{r \to 0} \log P_r = \tfrac{1}{n} \sum_{k=1}^n \log a_k$ and the rest follows easily by the properties of the logarithm and the continuity of the exponential function.
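A numerical illustration of the limit just derived (not from the original thread): for fixed positive numbers, the power mean approaches the geometric mean as r tends to 0. The sample values are arbitrary.

```python
import numpy as np

a = np.array([0.5, 2.0, 3.0, 7.0])           # arbitrary positive numbers
geo = np.prod(a) ** (1 / len(a))             # geometric mean (a1*...*an)^(1/n)

def power_mean(a, r):
    """P_r = (mean of a_i^r)^(1/r) for r != 0."""
    return np.mean(a ** r) ** (1 / r)

for r in [1.0, 0.1, 0.01, 0.001]:
    print(r, power_mean(a, r), geo)
# power_mean(a, r) approaches geo as r -> 0
```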
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9155404567718506, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=92c107f94da3f1e16b1b664c5df88e0a&p=4274260
Physics Forums ## How using a mirror to find the tangent at a point on the curve works Hi, I recently learned that to find the tangent at a point on any curve, you can simply place a mirror on that point and reflect the part of the curve on one side of that point such that the reflection flows smoothly into the other part of the curve on the other side. Once this is done, draw a line along the mirror, and this line would be perpendicular to the tangent. However, I do not understand the principle behind this method. How exactly does ensuring that the reflection of one side of the curve flows smoothly into the other side result in the mirror being perpendicular to the tangent? My best guess as of now is that the point must be the point about which the curve is exactly symmetrical, in which case it would make sense that if you reflect one part of the curve into the other part, the perpendicular line which you draw along the mirror would be the line of symmetry. But doesn't this mean that this method would not work if your point is not the point of symmetry? Thank you! Recognitions: Homework Help You should actually do it. Draw a random curve and get a mirror. If the mirror is not on the normal to the curve, then the reflection + the curve line will show a sharp bend at the mirror. Any curve that starts at that point, to be continuous, must join at the tangent angle so it does not matter that the actual curve is not a mirror reflection. Just try drawing a line where this does not happen. Oh, so let me see if I understand you correctly. If we have a curve, and the point we are taking the tangent about is X, and to the right of X is A, and to the left of X is B. If we put a mirror at X which reflects A, the reflection must flow into A, but it does not need to flow into B. (Though my notes mentioned that the reflection must flow into both A and B?) And as long as the A and the reflection are continuous, the mirror will definitely be at a normal to X. So it does not matter whether the curve is symmetrical about X. Is that right? Okay, I can understand that if you perform this on a hundred different curves, the result will always be the same. But what I wish to know is not the rule derived from experimentation, but rather, the principle behind this rule. Or is this one of those scenarios where there is no explanation, but that it just works? Recognitions: Homework Help ## How using a mirror to find the tangent at a point on the curve works If you're interested, here's a video that explains it, and also gives some examples of its real life applications. If you'd like to do some Maths on it yourself to see why it works, draw two non parallel lines that intersect, and label the acute angle between them as $\alpha$. Now, draw a ray hitting the first line (imagining it's a mirror), and then at the point of contact between these two lines, draw a dotted perpendicular line. The angle between the perpendicular and the light ray can be labelled $\theta$. Now as you should know due to the law of reflection, the angle of incidence equals the angle of reflection, so you'll need to label the angle between the perpendicular and the ray bouncing away from the line as $\theta$ as well. Now do the same with the ray bouncing from the first line into the second, but this time with another angle, say, $\phi$. What algebraic formulae can you up with to describe this situation? Think about the triangle with the angle $\alpha$, and the sum of the angles in a triangle. And then look for a second relationship. 
Recognitions: Homework Help Ah - thank you mentallic. I was about to explain that this will only work for continuous curves. I was suggesting the experiment because in the act of doing it realisation of the underlying principle will come. Recognitions: Homework Help Quote by Simon Bridge Ah - thank you mentallic. I was about to explain that this will only work for continuous curves. Well of course if we found a tangent at a point where the curve isn't continuous, we're doing it wrong Quote by Simon Bridge I was suggesting the experiment because in the act of doing it realisation of the underlying principle will come. Or if you can't get two mirrors that are free to rotate, some accurate diagrams will do just as well. Recognitions: Homework Help I think you can see the effect with just one plane mirror.
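The underlying principle can be made explicit with a little vector algebra (this is a worked example, not part of the thread). Reflection across a line through the point X with unit direction u sends a vector v to 2(v·u)u − v. The visible curve is the branch in front of the mirror together with its reflection, and its two one-sided tangent directions at X are d and −Refl_u(d), where d is the curve's tangent direction at X. The join is free of a kink exactly when Refl_u(d) = −d, which happens precisely when d·u = 0, i.e. when the mirror lies along the normal. The snippet below checks this for the illustrative curve y = x² at the point (1, 1), a choice of curve and point not taken from the thread.

```python
import numpy as np

# Tangent direction of y = x**2 at X = (1, 1): proportional to (1, 2).
d = np.array([1.0, 2.0]); d /= np.linalg.norm(d)

def reflect(v, phi):
    """Reflect vector v across the line through X whose direction makes angle phi with the x-axis."""
    u = np.array([np.cos(phi), np.sin(phi)])
    return 2 * np.dot(v, u) * u - v

# One-sided tangents of the "curve + mirror image" at X are d and -reflect(d, phi);
# the picture is kink-free only when these two directions coincide.
for phi_deg in [0, 30, 60, 63.4349, 90, 153.4349]:
    phi = np.radians(phi_deg)
    kink = np.degrees(np.arccos(np.clip(np.dot(d, -reflect(d, phi)), -1.0, 1.0)))
    print(f"mirror at {phi_deg:9.4f} deg: kink angle = {kink:7.3f} deg")
# The kink vanishes only for the normal direction, about 153.43 deg here
# (the tangent direction is about 63.43 deg, and 153.43 = 63.43 + 90).
```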
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552925229072571, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/115166/cardinality-of-subset-and-entropy
# cardinality of subset and entropy [closed]

(edited) I am considering the problem of the cardinality of a proper subset, $A\subset\{0,1\}^d$, where $d$ is an integer. Of course, $|A|<2^d$. I am wondering if there is a tighter bound for it. As suggested by Ashok, I suspect the subset cardinality has something to do with the entropy $H(P^*)$, where $P^*=\sum _{x\in A}P(x)$, and $P(x)$ is the (empirical distribution) type of $x$. Maybe a bound of the form $|A|<2^{d\,H(P^*)}$, which is similar to the form suggested by Ashok. Any idea will be appreciated. Thanks -

It doesn't make any sense when you write $P^*=\sum _{x\in A}P(x)$. What do we know about $A$? If $A$ is the set of all sequences of a particular type $P$, then $|A|\le 2^{nH(P)}$. – Ashok Mar 1 '12 at 6:53

I think that [cardinals] is aimed at infinite-cardinal sorts of questions. – Asaf Karagila Mar 1 '12 at 8:38

$A$ is a general subset of the d-dimensional binary set. We know the distribution of each element in the binary set. Is this sufficient to bound the size of the subset? – johnniac Mar 1 '12 at 16:08

I don't understand. If A is ANY proper subset, then the best estimate without knowing more information is $|A|<2^d$ – you Mar 1 '12 at 17:22

@johnniac: You need to make things clear here. If $P$ is the type of a sequence $x$, then $P$ would be a prob. distribution on $\{0,1\}$, so what do you mean by $P(x)$, for $x\in A$? – Ashok Mar 2 '12 at 6:04

## closed as not a real question by Asaf Karagila, Micah, Brandon Carter, Calvin Lin, Norbert Jan 30 at 7:30

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, see the FAQ.

## 1 Answer

I can only think about the cardinality of the typical set $A_\epsilon^{(d)}$ with respect to your distribution on the random variable $X$. If that is what you mean, then it is well known that: \begin{equation} 2^{n(H(X)+\epsilon)}\geq |A_\epsilon^{(d)}|\geq (1-\epsilon)2^{n(H(X)-\epsilon)} \end{equation} -
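To make the typical-set bound in the answer concrete, here is a brute-force check (not from the original exchange) for an i.i.d. Bernoulli source on $\{0,1\}^n$. The values of p, ε and n are arbitrary illustrative choices, and the lower bound in the displayed inequality is only guaranteed for n sufficiently large.

```python
import numpy as np
from itertools import product

p, eps, n = 0.3, 0.1, 16
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # binary entropy in bits

count = 0
for x in product([0, 1], repeat=n):
    k = sum(x)                                      # number of ones in the string
    logp = k * np.log2(p) + (n - k) * np.log2(1 - p)
    if abs(-logp / n - H) <= eps:                   # weak typicality condition
        count += 1

print("typical-set size:", count)
print("upper bound 2^{n(H+eps)}:       ", 2 ** (n * (H + eps)))
print("lower bound (1-eps)2^{n(H-eps)}:", (1 - eps) * 2 ** (n * (H - eps)))
```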
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295528531074524, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/classical-physics+regularization
# Tagged Questions

### Functional determinant approximation

Let the Hamiltonian in one dimension be $H+z$, then I would like to evaluate $\det(H+z)$. I have thought that if I know the function $Z(t) = \sum_{n>0}\exp(-tE_{n})$ I can use \sum_{n} ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8669191002845764, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Chi-squared_distribution
# Chi-squared distribution

Probability density function and cumulative distribution function (plots).

| Notation | $\chi^2(k)\!$ or $\chi^2_k\!$ |
|---|---|
| Parameters | $k \in \mathbb{N}~~$ (known as "degrees of freedom") |
| Support | x ∈ [0, +∞) |
| PDF | $\frac{1}{2^{\frac{k}{2}}\Gamma\left(\frac{k}{2}\right)}\; x^{\frac{k}{2}-1} e^{-\frac{x}{2}}\,$ |
| CDF | $\frac{1}{\Gamma\left(\frac{k}{2}\right)}\;\gamma\left(\frac{k}{2},\,\frac{x}{2}\right)$ |
| Mean | k |
| Median | $\approx k\bigg(1-\frac{2}{9k}\bigg)^3$ |
| Mode | max{ k − 2, 0 } |
| Variance | 2k |
| Skewness | $\scriptstyle\sqrt{8/k}\,$ |
| Excess kurtosis | 12 / k |
| Entropy | $\frac{k}{2}\!+\!\ln(2\Gamma(k/2))\!+\!(1\!-\!k/2)\psi(k/2)$ |
| MGF | $(1-2t)^{-k/2}$ for $t < \tfrac{1}{2}$ |
| CF | $(1-2it)^{-k/2}$ [1] |

In probability theory and statistics, the chi-squared distribution (also chi-square or $\chi^2$-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. It is one of the most widely used probability distributions in inferential statistics, e.g., in hypothesis testing or in construction of confidence intervals.[2][3][4][5] When there is a need to contrast it with the noncentral chi-squared distribution, this distribution is sometimes called the central chi-squared distribution.

The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, like Friedman's analysis of variance by ranks. The chi-squared distribution is a special case of the gamma distribution.

## Definition

If Z1, ..., Zk are independent, standard normal random variables, then the sum of their squares, $Q\ = \sum_{i=1}^k Z_i^2 ,$ is distributed according to the chi-squared distribution with k degrees of freedom. This is usually denoted as $Q\ \sim\ \chi^2(k)\ \ \text{or}\ \ Q\ \sim\ \chi^2_k .$

The chi-squared distribution has one parameter: k, a positive integer that specifies the number of degrees of freedom (i.e. the number of Zi’s).

## Characteristics

Further properties of the chi-squared distribution can be found in the box at the top of this article.

### Probability density function

The probability density function (pdf) of the chi-squared distribution is $f(x;\,k) = \begin{cases} \frac{x^{(k/2)-1} e^{-x/2}}{2^{k/2} \Gamma\left(\frac{k}{2}\right)}, & x \geq 0; \\ 0, & \text{otherwise}. \end{cases}$ where Γ(k/2) denotes the Gamma function, which has closed-form values for integer k. For derivations of the pdf in the cases of one, two and k degrees of freedom, see Proofs related to chi-squared distribution.

### Cumulative distribution function

Chernoff bound for the CDF and tail (1-CDF) of a chi-square random variable with ten degrees of freedom (k = 10)

Its cumulative distribution function is: $F(x;\,k) = \frac{\gamma(\frac{k}{2},\,\frac{x}{2})}{\Gamma(\frac{k}{2})} = P\left(\frac{k}{2},\,\frac{x}{2}\right),$ where γ(k,z) is the lower incomplete Gamma function and P(k,z) is the regularized Gamma function.
In a special case of k = 2 this function has a simple form: $F(x;\,2) = 1 - e^{-\frac{x}{2}}.$ For the cases when 0 < z < 1 (which include all of the cases when this CDF is less than half), the following Chernoff upper bound may be obtained:[6] $F(z k;\,k) \leq (z e^{1-z})^{k/2}.$ The tail bound for the cases when z > 1 follows similarly $1-F(z k;\,k) \leq (z e^{1-z})^{k/2}.$ Tables of this cumulative distribution function are widely available and the function is included in many spreadsheets and all statistical packages. For another approximation for the CDF modeled after the cube of a Gaussian, see under Noncentral chi-squared distribution. ### Additivity It follows from the definition of the chi-squared distribution that the sum of independent chi-squared variables is also chi-squared distributed. Specifically, if {Xi}i=1n are independent chi-squared variables with {ki}i=1n degrees of freedom, respectively, then Y = X1 + ⋯ + Xn is chi-squared distributed with k1 + ⋯ + kn degrees of freedom. ### Information entropy The information entropy is given by $H = \int_{-\infty}^\infty f(x;\,k)\ln f(x;\,k) \, dx = \frac{k}{2} + \ln\!\left[2\,\Gamma\!\left(\frac{k}{2}\right)\right] + \left(1-\frac{k}{2}\right)\, \psi\!\left[\frac{k}{2}\right],$ where ψ(x) is the Digamma function. The Chi-squared distribution is the maximum entropy probability distribution for a random variate X for which $E(X)=\nu$ and $E(\ln(X))=\psi\left(\frac{1}{2}\right)+\ln(2)$ are fixed.[7] ### Noncentral moments The moments about zero of a chi-squared distribution with k degrees of freedom are given by[8][9] $\operatorname{E}(X^m) = k (k+2) (k+4) \cdots (k+2m-2) = 2^m \frac{\Gamma(m+\frac{k}{2})}{\Gamma(\frac{k}{2})}.$ ### Cumulants The cumulants are readily obtained by a (formal) power series expansion of the logarithm of the characteristic function: $\kappa_n = 2^{n-1}(n-1)!\,k$ ### Asymptotic properties By the central limit theorem, because the chi-squared distribution is the sum of k independent random variables with finite mean and variance, it converges to a normal distribution for large k. For many practical purposes, for k > 50 the distribution is sufficiently close to a normal distribution for the difference to be ignored.[10] Specifically, if X ~ χ²(k), then as k tends to infinity, the distribution of $(X-k)/\sqrt{2k}$ tends to a standard normal distribution. However, convergence is slow as the skewness is $\sqrt{8/k}$ and the excess kurtosis is 12/k. • The sampling distribution of ln(σ2) converges to normality much faster than the sampling distribution of σ2,[11] as the logarithm removes much of the asymmetry.[12] Other functions of the chi-squared distribution converge more rapidly to a normal distribution. Some examples are: • If X ~ χ²(k) then $\scriptstyle\sqrt{2X}$ is approximately normally distributed with mean $\scriptstyle\sqrt{2k-1}$ and unit variance (result credited to R. A. Fisher). • If X ~ χ²(k) then $\scriptstyle\sqrt[3]{X/k}$ is approximately normally distributed with mean $\scriptstyle 1-2/(9k)$ and variance $\scriptstyle 2/(9k) .$[13] This is known as the Wilson-Hilferty transformation. ## Relation to other distributions Approximate formula for median compared with numerical quantile (top) as presented in SAS Software. Difference between numerical quantile and approximate formula (bottom). 
• As $k\to\infty$, $(\chi^2_k-k)/\sqrt{2k} \xrightarrow{d}\ N(0,1) \,$ (normal distribution) • $\chi_k^2 \sim {\chi'}^2_k(0)$ (Noncentral chi-squared distribution with non-centrality parameter $\lambda = 0$) • If $X \sim \mathrm{F}(\nu_1, \nu_2)$ then $Y = \lim_{\nu_2 \to \infty} \nu_1 X$ has the chi-squared distribution $\chi^2_{\nu_{1}}$ • As a special case, if $X \sim \mathrm{F}(1, \nu_2)\,$ then $Y = \lim_{\nu_2 \to \infty} X\,$ has the chi-squared distribution $\chi^2_{1}$ • $\|\boldsymbol{N}_{i=1,...,k}{(0,1)}\|^2 \sim \chi^2_k$ (The squared norm of k standard normally distributed variables is a chi-squared distribution with k degrees of freedom) • If $X \sim {\chi}^2(\nu)\,$ and $c>0 \,$, then $cX \sim {\Gamma}(k=\nu/2, \theta=2c)\,$. (gamma distribution) • If $X \sim \chi^2_k$ then $\sqrt{X} \sim \chi_k$ (chi distribution) • If $X \sim \chi^2 \left( 2 \right)$, then $X \sim \mathrm{Exp(1/2)}$ is an exponential distribution. (See Gamma distribution for more.) • If $X \sim \mathrm{Rayleigh}(1)\,$ (Rayleigh distribution) then $X^2 \sim \chi^2(2)\,$ • If $X \sim \mathrm{Maxwell}(1)\,$ (Maxwell distribution) then $X^2 \sim \chi^2(3)\,$ • If $X \sim \chi^2(\nu)$ then $\tfrac{1}{X} \sim \mbox{Inv-}\chi^2(\nu)\,$ (Inverse-chi-squared distribution) • The chi-squared distribution is a special case of type 3 Pearson distribution • If $X \sim \chi^2(\nu_1)\,$ and $Y \sim \chi^2(\nu_2)\,$ are independent then $\tfrac{X}{X+Y} \sim {\rm Beta}(\tfrac{\nu_1}{2}, \tfrac{\nu_2}{2})\,$ (beta distribution) • If $X \sim {\rm U}(0,1)\,$ (uniform distribution) then $-2\log{(U)} \sim \chi^2(2)\,$ • $\chi^2(6)\,$ is a transformation of Laplace distribution • If $X_i \sim \mathrm{Laplace}(\mu,\beta)\,$ then $\sum_{i=1}^n{\frac{2 |X_i-\mu|}{\beta}} \sim \chi^2(2n)\,$ • chi-squared distribution is a transformation of Pareto distribution • Student's t-distribution is a transformation of chi-squared distribution • Student's t-distribution can be obtained from chi-squared distribution and normal distribution • Noncentral beta distribution can be obtained as a transformation of chi-squared distribution and Noncentral chi-squared distribution • Noncentral t-distribution can be obtained from normal distribution and chi-squared distribution A chi-squared variable with k degrees of freedom is defined as the sum of the squares of k independent standard normal random variables. If Y is a k-dimensional Gaussian random vector with mean vector μ and rank k covariance matrix C, then X = (Y−μ)TC−1(Y−μ) is chi-squared distributed with k degrees of freedom. The sum of squares of statistically independent unit-variance Gaussian variables which do not have mean zero yields a generalization of the chi-squared distribution called the noncentral chi-squared distribution. If Y is a vector of k i.i.d. standard normal random variables and A is a k×k idempotent matrix with rank k−n then the quadratic form YTAY is chi-squared distributed with k−n degrees of freedom. The chi-squared distribution is also naturally related to other distributions arising from the Gaussian. In particular, • Y is F-distributed, Y ~ F(k1,k2) if $\scriptstyle Y = \frac{X_1 / k_1}{X_2 / k_2}$ where X1 ~ χ²(k1) and X2  ~ χ²(k2) are statistically independent. • If X is chi-squared distributed, then $\scriptstyle\sqrt{X}$ is chi distributed. • If X1  ~  χ2k1 and X2  ~  χ2k2 are statistically independent, then X1 + X2  ~ χ2k1+k2. If X1 and X2 are not independent, then X1 + X2 is not chi-squared distributed. 
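Two of the relations listed above are easy to spot-check by simulation. The snippet below is not part of the article; it uses scipy and a Kolmogorov-Smirnov test, with a sample size chosen arbitrarily.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, n = 5, 200_000

# The sum of squares of k independent standard normals is chi-squared with k degrees of freedom.
samples = (rng.standard_normal((n, k)) ** 2).sum(axis=1)
print(stats.kstest(samples, stats.chi2(df=k).cdf))        # p-value should not be small

# Chi-squared with 2 degrees of freedom coincides with an exponential of rate 1/2 (scale 2).
samples2 = rng.chisquare(df=2, size=n)
print(stats.kstest(samples2, stats.expon(scale=2).cdf))   # p-value should not be small
```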
## Generalizations

The chi-squared distribution is obtained as the sum of the squares of k independent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below.

### Chi-squared distributions

#### Noncentral chi-squared distribution

Main article: Noncentral chi-squared distribution

The noncentral chi-squared distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance and nonzero means.

#### Generalized chi-squared distribution

Main article: Generalized chi-squared distribution

The generalized chi-squared distribution is obtained from the quadratic form z′Az where z is a zero-mean Gaussian vector having an arbitrary covariance matrix, and A is an arbitrary matrix.

### Gamma, exponential, and related distributions

The chi-squared distribution X ~ χ²(k) is a special case of the gamma distribution, in that X ~ Γ(k/2, 1/2) using the rate parameterization of the gamma distribution (or X ~ Γ(k/2, 2) using the scale parameterization of the gamma distribution) where k is an integer. Because the exponential distribution is also a special case of the Gamma distribution, we also have that if X ~ χ²(2), then X ~ Exp(1/2) is an exponential distribution. The Erlang distribution is also a special case of the Gamma distribution and thus we also have that if X ~ χ²(k) with even k, then X is Erlang distributed with shape parameter k/2 and rate parameter 1/2.

## Applications

The chi-squared distribution has numerous applications in inferential statistics, for instance in chi-squared tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student’s t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom.

Following are some of the most common situations in which the chi-squared distribution arises from a Gaussian-distributed sample.

• if X1, ..., Xn are i.i.d. N(μ, σ2) random variables, then $\sum_{i=1}^n(X_i - \bar X)^2 \sim \sigma^2 \chi^2_{n-1}$ where $\bar X = \frac{1}{n} \sum_{i=1}^n X_i$.

• The table below shows probability distributions with names starting with "chi" for some statistics based on Xi ∼ Normal(μi, σ2i), i = 1, ⋯, k, independent random variables:

| Name | Statistic |
|---|---|
| chi-squared distribution | $\sum_{i=1}^k \left(\frac{X_i-\mu_i}{\sigma_i}\right)^2$ |
| noncentral chi-squared distribution | $\sum_{i=1}^k \left(\frac{X_i}{\sigma_i}\right)^2$ |
| chi distribution | $\sqrt{\sum_{i=1}^k \left(\frac{X_i-\mu_i}{\sigma_i}\right)^2}$ |
| noncentral chi distribution | $\sqrt{\sum_{i=1}^k \left(\frac{X_i}{\sigma_i}\right)^2}$ |

## Table of χ2 value vs p-value

The p-value is the probability of observing a test statistic at least as extreme in a chi-squared distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom (df) gives the probability of having obtained a value less extreme than this point, subtracting the CDF value from 1 gives the p-value. The table below gives a number of p-values matching to χ2 for the first 10 degrees of freedom. A p-value of 0.05 or less is usually regarded as statistically significant, i.e.
the observed deviation from the null hypothesis is significant.

| Degrees of freedom (df) | p = 0.95 | 0.90 | 0.80 | 0.70 | 0.50 | 0.30 | 0.20 | 0.10 | 0.05 | 0.01 | 0.001 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.004 | 0.02 | 0.06 | 0.15 | 0.46 | 1.07 | 1.64 | 2.71 | 3.84 | 6.64 | 10.83 |
| 2 | 0.10 | 0.21 | 0.45 | 0.71 | 1.39 | 2.41 | 3.22 | 4.60 | 5.99 | 9.21 | 13.82 |
| 3 | 0.35 | 0.58 | 1.01 | 1.42 | 2.37 | 3.66 | 4.64 | 6.25 | 7.82 | 11.34 | 16.27 |
| 4 | 0.71 | 1.06 | 1.65 | 2.20 | 3.36 | 4.88 | 5.99 | 7.78 | 9.49 | 13.28 | 18.47 |
| 5 | 1.14 | 1.61 | 2.34 | 3.00 | 4.35 | 6.06 | 7.29 | 9.24 | 11.07 | 15.09 | 20.52 |
| 6 | 1.63 | 2.20 | 3.07 | 3.83 | 5.35 | 7.23 | 8.56 | 10.64 | 12.59 | 16.81 | 22.46 |
| 7 | 2.17 | 2.83 | 3.82 | 4.67 | 6.35 | 8.38 | 9.80 | 12.02 | 14.07 | 18.48 | 24.32 |
| 8 | 2.73 | 3.49 | 4.59 | 5.53 | 7.34 | 9.52 | 11.03 | 13.36 | 15.51 | 20.09 | 26.12 |
| 9 | 3.32 | 4.17 | 5.38 | 6.39 | 8.34 | 10.66 | 12.24 | 14.68 | 16.92 | 21.67 | 27.88 |
| 10 | 3.94 | 4.86 | 6.18 | 7.27 | 9.34 | 11.78 | 13.44 | 15.99 | 18.31 | 23.21 | 29.59 |

Each entry is the χ² value whose upper-tail probability (p-value) heads its column, for the stated degrees of freedom (df).[14] The three rightmost columns (p ≤ 0.05) correspond to values conventionally regarded as significant; the remaining columns are non-significant.

## History

This distribution was first described by the German statistician Helmert in papers of 1875/1876,[15][16] where he computed the sampling distribution of the sample variance of a normal population. Thus in German this was traditionally known as the Helmertsche ("Helmertian") or "Helmert distribution". The distribution was independently rediscovered by Karl Pearson in the context of goodness of fit, for which he developed his Pearson's chi-squared test, published in (Pearson 1900), with computed table of values published in (Elderton 1902), collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII). Further developments and corrections to the early work were due to Fisher in the 1920s.[15]

## References

1. M.A. Sanders. "Characteristic function of the central chi-squared distribution". Retrieved 2009-03-06.
2. Abramowitz, Milton; Stegun, Irene A., eds. (1965), "Chapter 26", Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, p. 940, ISBN 978-0486612720, MR 0167642.
3. Johnson, N.L.; Kotz, S.; Balakrishnan, N. (1994). Continuous Univariate Distributions (Second Ed., Vol. 1, Chapter 18). John Wiley and Sons. ISBN 0-471-58495-9.
4. Mood, Alexander; Graybill, Franklin A.; Boes, Duane C. (1974). Introduction to the Theory of Statistics (Third Edition, pp. 241–246). McGraw-Hill. ISBN 0-07-042864-6.
5. Dasgupta, Sanjoy D. A.; Gupta, Anupam K. (2002). "An Elementary Proof of a Theorem of Johnson and Lindenstrauss". Random Structures and Algorithms 22: 60–65. Retrieved 2012-05-01.
6. Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics (Elsevier): 219–230. Retrieved 2011-06-02.
7. Chi-squared distribution, from MathWorld, retrieved Feb. 11, 2009.
8. M. K. Simon, Probability Distributions Involving Gaussian Random Variables, New York: Springer, 2002, eq. (2.35), ISBN 978-0-387-34657-1.
9. Box, Hunter and Hunter (1978). Statistics for experimenters. Wiley. p. 118. ISBN 0471093157.
10. Wilson, E.B.; Hilferty, M.M. (1931). "The distribution of chi-squared". Proceedings of the National Academy of Sciences, Washington, 17, 684–688.
12. F. R. Helmert, "Ueber die Wahrscheinlichkeit der Potenzsummen der Beobachtungsfehler und über einige damit im Zusammenhange stehende Fragen", Zeitschrift für Mathematik und Physik 21, 1876, S. 102–219.

• Hald, Anders (1998). A history of mathematical statistics from 1750 to 1930. New York: Wiley. ISBN 0-471-17912-4.
• Elderton, William Palin (1902). "Tables for Testing the Goodness of Fit of Theory to Observation". Biometrika 1 (2): 155–163. doi:10.1093/biomet/1.2.155.
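The tabulated χ² values above are simply the inverse survival function of the distribution evaluated at each column's p-value. A short scipy check (not part of the article) reproduces the df = 3 row, agreeing with the table up to rounding in the last digit.

```python
from scipy.stats import chi2

# Each tabulated value v satisfies P(X >= v) = p for X ~ chi2(df), i.e. v = isf(p, df).
ps = [0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.01, 0.001]
print([round(chi2.isf(p, df=3), 2) for p in ps])
# matches the df = 3 row above, apart from rounding in the final digit
```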
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 67, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7904515862464905, "perplexity_flag": "middle"}
http://nrich.maths.org/oddsandevens
### Coin Tossing Games You and I play a game involving successive throws of a fair coin. Suppose I pick HH and you pick TH. The coin is thrown repeatedly until we see either two heads in a row (I win) or a tail followed by a head (you win). What is the probability that you win? ### Win or Lose? A gambler bets half the money in his pocket on the toss of a coin, winning an equal amount for a head and losing his money if the result is a tail. After 2n plays he has won exactly n times. Has he more money than he started with? ### Thank Your Lucky Stars A counter is placed in the bottom right hand corner of a grid. You toss a coin and move the star according to the following rules: ... What is the probability that you end up in the top left-hand corner of the grid? # Odds and Evens ##### Stage: 3 and 4 Challenge Level: Here is a set of numbered balls used for a game: To play the game, the balls are mixed up and two balls are randomly picked out together. For example: The numbers on the balls are added together: $4 + 5 = 9$ If the total is even, you win. If the total is odd, you lose. How can you decide whether the game is fair? Here are three more sets of balls: Which set would you choose to play with, to maximise your chances of winning? What proportion of the time would you expect to win each game? Test your predictions using the interactivity. Is it possible to produce a fair game? Can you find a set of balls where the chance of getting an even total is the same as the chance of getting an odd total? Can you find more than one such set?
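One way to decide whether a game of this kind is fair is simply to enumerate the pairs. The sketch below does this in Python (standard library only); the sets of balls used here are made-up examples, since the actual sets appear only as pictures on the original page.

```python
# Sketch: brute-force fairness check for the "two balls, even total wins" game.
# The example sets below are hypothetical illustrations, not the sets from the
# original problem.
from itertools import combinations
from fractions import Fraction

def p_even(balls):
    """Probability that the sum of two balls drawn together is even."""
    pairs = list(combinations(balls, 2))
    wins = sum(1 for a, b in pairs if (a + b) % 2 == 0)
    return Fraction(wins, len(pairs))

print(p_even([4, 5, 6]))           # 1/3 (only one winning pair out of three)
print(p_even([1, 2, 3, 4, 5, 6]))  # 2/5
print(p_even([1, 3, 5, 2]))        # 1/2, so a fair game is possible
```

Only the parity of each ball matters, so a set with o odd and e even balls gives an even total with probability (C(o,2) + C(e,2)) / C(o+e,2).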
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550797939300537, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21552?sort=votes
## roadmap for studying arithmetic geometry

hi everybody, I have already finished Hartshorne's Algebraic Geometry from chapter 1 to chapter 4, so I'd like to find some suggestions about the next step in studying arithmetic geometry. I want to know how to use scheme theory and its cohomology to solve arithmetic problems. Would you recommend some books and papers of this kind? Thank you very much! PS: I also want to learn some material about moduli theory; if you like, could you recommend some books or papers? -

1 From my point of view, it is very important to have a sound basis in algebraic number theory. – Regenbogen Apr 16 2010 at 20:57

## 6 Answers

My suggestion, if you have really worked through most of Hartshorne, is to begin reading papers, referring to other books as you need them. One place to start is Mazur's "Eisenstein Ideal" paper. The suggestion of Cornell--Silverman is also good. (This gives essentially the complete proof, due to Faltings, of the Tate conjecture for abelian varieties over number fields, and of the Mordell conjecture.) You might also want to look at Tate's original paper on the Tate conjecture for abelian varieties over finite fields, which is a masterpiece. Another possibility is to learn etale cohomology (which you will have to learn in some form or other if you want to do research in arithmetic geometry). For this, my suggestion is to try to work through Deligne's first Weil conjectures paper (in which he proves the Riemann hypothesis), referring to textbooks on etale cohomology as you need them. -

Do you happen to know where those Deligne papers were published? – Joel Dodge Apr 16 2010 at 17:18

IHES, I think. It is easy to find on mathscinet, or probably just typing "Weil I Deligne" into Google will find it. – Emerton Apr 16 2010 at 20:35

Matt, not that it matters, but due to the multi-year gap between the two papers probably it wasn't called Weil I, for the same reason World War I wasn't called that at the time of its "creation" (who expected WWII?). – BCnrd Apr 17 2010 at 3:12

I wondered about this, but nevertheless, typing in "Weil I Deligne" gives the wikipedia entry as the first link, which in turn gives the reference. (This is what I thought would happen. As it turns out, it is titled "La conjecture de Weil: I" (!), and did appear in Publications of the IHES, vol. 43.) – Emerton Apr 17 2010 at 3:38

2 And the introduction states straight away that Deligne intends to publish a second article on the subject. The delay seems to be part of the never ending turmoil concerning SGA 4,5 and 5. – Olivier Apr 20 2010 at 9:29

If you can find a (say, library) copy of Cornell and Silverman's Arithmetic Geometry I would highly recommend it. It is a comprehensive treatment of the arithmetic theory of abelian varieties using the modern scheme-theoretic language. Lamentably it's basically impossible to buy a copy these days (there's usually one available on-line from some obscure seller for something like \$950). I also agree with the above recommendations of Liu's Algebraic Geometry and Arithmetic Curves.
It builds scheme theory from scratch (even developing the necessary commutative algebra in first chapter) and has an eye towards arithmetic applications throughout. In particular, the end of the book has a great chapter on reduction of curves. If you want a treatment of elliptic curves in extreme generality (using scheme language) then you might be interested in Katz' and Mazur's Arithmetic Moduli of Elliptic Curves. I emphasize however, that this particular book is very difficult (at least for me it is). - +1: Katz-Mazur is especially excellent. – stankewicz Apr 16 2010 at 13:47 20 Mumford's "Abelian Varieties" (note the appendices), Ch. 6 of his GIT book, sga3 on quotients, "Neron Models" for good techniques & much more, books on etale cohomology (Freitah-Kiehl, Milne), SGA1 (skip boring expose on fibered categories), Tate's papers on p-divisible groups & Honda-Tate theory, Katz' paper on p-adic properties of modular forms, FGA Explained (for Hilbert & Picard schemes, needed for relative Jacobians and much more). Illuminating to read EGA I (e.g., good intro to formal schemes, needed for serious deformation theory and Tate curve and beyond). – BCnrd Apr 16 2010 at 14:21 1 Because it uses Drinfeld's notion of a "Drinfeld basis" to define p-power level structures in char. p. This gives an important tool that is not in Deligne--Rapoport. (I should add, I don't know if this makes it "preferable", but it is the main technical innovation of Katz--Mazur.) – Emerton Apr 16 2010 at 16:12 2 To augment Emerton's comment, KM works nicely over $\mathbf{Z}$ but gives no conceptual technique at cusps whereas DR provides good technique at the cusps but inverts the level. Funny part is that when KM deal with cusps, they list axioms concerning Tate curve and one needs DR techniques to justify everything in their axioms. So the KM approach to handling cusps requires theory of generalized elliptic curves even though KM never mentions that concept, so they sidestep if their proper $\mathbf{Z}$-curves are moduli spaces for generalized elliptic curves with Drinfeld structure (they are). – BCnrd Apr 16 2010 at 19:24 3 OK, I'm not endorsing anything, but there's this website called gigapedia.org. It may or may not help in situations where a book is out of print and unattainable for less than US\$1000. – Pietro KC Apr 17 2010 at 5:37 show 3 more comments An apology first: This is more a supplement to Charles' answer than an answer itself. This was originally a set of comments, but I was not able to format the comments so as to be readable. "Arithmetic of Elliptic curves" is particularly recommended for those who want a first look at arithmetic applications of cohomology. Chapter 8 proves the Mordell-Weil theorem using Galois cohomology. Pretty much everything in this book is good though and the only overlap with Hartshorne is in the first two chapters. It's the canonical book for elliptic curves for a reason! "Rational Points on Elliptic curves" would probably not be so exciting for someone who's already gone through Hartshorne. "Advanced Topics" is exactly that, but maybe a little more friendly than most topics books. The chapters are essentially free standing. Of particular interest might be the chapter on Elliptic surfaces which give a peek at ℤ schemes in (almost) all their glory. I've only glanced through Hindry-Silverman, so I couldn't say much either way. 
"An Invitation to Arithmetic Geometry" for this reader would primarily serve to highlight how Algebraic Number Theory intersects Arithmetic Geometry, I think. "Algebraic Geometry and Arithmetic Curves" is a fantastic reference for Arithmetic Geometry, and there's quite a lot of overlap with Hartshorne. edit: For moduli of elliptic curves, Chapter 1 (Modular forms) of "Advanced topics" is a good place to start, and Katz-Mazur is a good eventual target. Between those two, there are lots of books on modular forms and moduli spaces to fill the gap. I'm partial to Diamond and Shurman, but the original works of Shimura deserve recognition here. Your mileage may vary. - "Algebraic Geometry and Arithmetic Curves" by Liu might be good, it covers a lot of the same material, but does it more arithmetically. There's also "An Invitation to Arithmetic Geometry" by Lorenzini Also, don't discount the "series" by Silverman: "Rational Points on Elliptic Curves" (with Tate), "Arithmetic of Elliptic Curves", "Advanced Topics in the Arithmetic of Elliptic Curves" and "Diophantine Geometry" with Hindry. - thank you so much for answer my question. actually, I think silverman's books did not use scheme theory. I hope can find some materials which can show me the power of scheme and sheaf cohomology. I know Lorenzini's book, but I think I don't like his writting style. – kiseki Apr 16 2010 at 12:52 Some chapters in Advanced Topics in the Arithmetic of Elliptic Curves by Silverman do use scheme theory. – David Corwin Jul 15 2010 at 16:11 In addition to the mentioned Cornell--Silverman book there is another Cornell--Silverman (+Stevens) collection named "Modular forms and Fermat's last theorem" (http://www.springer.com/mathematics/numbers/book/978-0-387-98998-3), which I can warmly recommend. It's available in paperback. The purpose of the volume is to cover the material used in the proof of Fermat's last theorem. Therefore a lot of arithmetic geometry is covered at a reasonable graduate-level (maybe a few more demanding surveys, though). Brian Conrad from previous comments is responsible for one nice paper in the volume. I especially like Tate's paper on Finite group schemes and Mazur's on deformation theory of Galois representations. - 1 That book contained my first published mention in mathematics so you get +1 ;-) – Kevin Buzzard Apr 17 2010 at 12:15 Thanks! :) Now I have to go find out where that citation/mention is. – Daniel Larsson Apr 17 2010 at 12:28 Considering it is now two full years since the OP asked this question this reply is (probably) purely for archival purposes if someone (like me) happens to stumble upon this question and finds it useful. Professor Emerton's detailed comment on Professor Tao's blog is incredibly useful as a roadmap found here. Also, Professor Ellenberg has a webpage for prospective students who wish to be advised by him. On it he has recommended books to read in pursuit of this path. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9411349296569824, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/23991/special-relativity-and-e-mc2
# Special Relativity and $E = mc^2$ I read somewhere that $E=mc^2$ shows that if something was to travel faster than the speed of light then they would have infinite mass and would have used infinite energy. How does the equation show this? The reason I think this is because of this quote from Hawking (I may be misinterpreting it): Because of the equivalence of energy and mass, the energy which an object has due to its motion will add to its mass. This effect is only significant to objects moving at speeds close to the speed of light. At 10 per cent of the speed of light an objects mass is only 0.5 per cent more than normal, at 90 per cent of the speed of light it would be twice its normal mass. As an object approaches the speed of light its mass rises ever more quickly, so takes more energy to speed it up further. It cannot therefore reach the speed of light because its mass would be infinite, and by the equivalence of mass and energy, it would have taken an infinite amount of energy to get there. The reason I think he's saying that this is as a result of $E = mc^2$ is because he's talking about the equivalence of $E$ and $c$ from the equation. - – Qmechanic♦ Apr 18 '12 at 19:42 travel faster than the speed of light is still impossible. There is no scientific evidence. it seems speed of photons is a kind of cosmic limit – user8784 Apr 19 '12 at 20:53 ## 3 Answers I read somewhere that $E=mc^2$ shows that if something was to travel faster than the speed of light then they would have infinite mass and would have used infinite energy. Nope, not true. For a couple of reasons, but first, let me explain what $E = mc^2$ means in modern-day physics. The equation $E = mc^2$ itself only applies to an object that is at rest, i.e. not moving. For objects that are moving, there is a more general form of the equation, $$E^2 - p^2 c^2 = m^2 c^4$$ ($p$ is momentum), but with a little algebra you can convert this into $$E = \gamma mc^2$$ where $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$. This factor $\gamma$, sometimes called the relativistic dilation factor, is a number that depends on speed. It starts out at $\gamma = 1$ when $v = 0$, and it increases with increasing speed. As the speed $v$ gets closer and closer to $c$, $\gamma$ approaches infinity. Armed with this knowledge, some people look at the formula $E = \gamma mc^2$ and say that, clearly, if a massive object were to reach the speed of light, then $\gamma$ would be infinite and so the object's energy would be infinite. But that's not really true; the correct interpretation is that it's impossible for a massive object to travel at the speed of light. (There are other, more mathematically complicated but more convincing, ways to show this.) To top it off, there is an outdated concept called "relativistic mass" that gets involved in this. In the early days of relativity, people would write Einstein's famous formula as $E = m_0 c^2$ for an object at rest, and $E = m_\text{rel}c^2$ for an object in motion, where $m_\text{rel} = \gamma m_0$. (The $m$ I wrote in the previous paragraphs corresponds to $m_0$ in this paragraph.) This quantity $m_\text{rel}$ was the relativistic mass, a property which increases as an object speeds up. So if you thought that an object would have infinite energy if it moved at the speed of light, then you would also think that its relativistic mass would become infinite if it moved at the speed of light. 
Often people would get lazy and neglect to write the subscript "rel", which caused a lot of people to mix up the two different kinds of mass. So from that, you'd get statements like "an object moving at light speed has infinite mass" (without clarifying that the relativistic mass was the one they meant). After a while, physicists realized that the relativistic mass was really just another name for energy, since they're always proportional ($E = m_\text{rel}c^2$), so we did away with the idea of relativistic mass entirely. These days, "mass" or $m$ just means rest mass, and so $E = mc^2$ applies only to objects at rest. You have to use one of the more general formulas if you want to deal with a moving object. Now, with that out of the way: unfortunately, the passage you've quoted from Hawking's book uses the old convention, where "mass" refers to relativistic mass. The "equivalence of energy and mass" he mentions is an equivalence of energy and relativistic mass, expressed by the equation $E = m_\text{rel}c^2$. Under this set of definitions, it is true that an object approaching the speed of light would have its (relativistic) mass approach infinity (i.e. increase without bound). Technically, it's not wrong, because Hawking is using the concept correctly, but it's out of line with the way we do things in physics these days. With modern usage, however, I might rephrase that paragraph as follows: Because energy contributes to an object's inertia (resistance to acceleration), adding a fixed amount of energy has less of an effect as the object moves faster. This effect is only significant to objects moving at speeds close to the speed of light. At 10 per cent of the speed of light, it takes only 0.5 per cent more energy than normal to achieve a given change in velocity, but at 90 per cent of the speed of light it would take twice as much energy to produce the same change. As an object approaches the speed of light, its inertia rises ever more quickly, so it takes more and more energy to speed it up by smaller and smaller amounts. It cannot therefore reach the speed of light because it would take an infinite amount of energy to get there. Disclaimer: all I've said here applies to a fundamental particle or object moving in a straight line. When you start to consider particles with components which may be moving relative to each other, the idea of relativistic mass kind of makes a comeback... kind of. But that's another story. - Thanks, I read it in Hawking's A Brief History Of Time. I think that's what he was saying. – Olly Price Apr 18 '12 at 19:44 Huh, I would expect Hawking to be reasonably accurate with these things. Though it is easy to make statements that can be misinterpreted, when talking about relativity. – David Zaslavsky♦ Apr 18 '12 at 19:56 I'll quote the book a bit later, I may be misinterpreting it. – Olly Price Apr 18 '12 at 20:01 OK, well, if you'd like to edit that quote into your question when you have a chance, I can update my answer to address it. – David Zaslavsky♦ Apr 18 '12 at 20:04 1 @Ron: No, but we should continue this in the chat room. – David Zaslavsky♦ Apr 20 '12 at 21:51 show 12 more comments I would blame the restriction on the speed at which objects can travel more on one of the two postulates that Einstein used to derive $E=mc^2$. I wish Einstein's original derivation of $E=mc^2$ was taught in schools! It is such an amazing piece of work if you go through it in detail. 
But it's also cryptic, jumping very quickly through ideas that apparently seemed pretty "obvious" to Einstein, but which are far from that to the rest of us. In any case, he derived it over the course of two papers. The first one defined the theory of special relativity, while the second and very short one derived his famous equation. It originally used $L$ for $E$, and Einstein never quite wrote it out the way we are used to seeing it. His first paper began with with two simple postulates, which are: (1) No test of mechanics or optics changes when you are moving without acceleration, and (2) The speed of light is always constant when measured from such a moving frame. Amazingly, that's all that is needed.[1] Now, if you want to point fingers at where exactly the idea that you cannot travel faster than light emerges from special relativity, I'd point at the second of Einstein's postulates: Every frame sees the same speed of light. So why is that important? Picture it this way: If light must always travel at $c$ from your perspective, what happens if you launch a rocket capable of traveling at very nearly $c$, and your rocket in turn sends out a light pulse ahead of itself? The rocket will see that pulse as traveling at $c$. However, as the person who launched the rocket, you must see something different, since otherwise the light beam emitted by the rocket would look like it's traveling at nearly $2c$, which would violate Einstein's second postulate. So, the light pulse in front of the spaceship must necessarily travel at $c$ from your perspective also, and that in turn means that the spaceship must always remain behind any pulse of light that can be emitted. If you draw that out on paper, you get this quirky result that from your view, objects moving closer and closer to the speed of light must nonetheless always remain behind an actual beam of light, since any other result would enable you to see light a light pulse moving faster than $c$. Objects thus wind up getting "flattened" against the barrier represented by the speed of an actual light beam. There are other consequences of this apparent flattening, which is called the Lorentz contraction, that I won't get into here. They include slowed time and increased mass, both of which can be derived from the original simple postulates that Einstein made. So, the bottom line: It's more accurate to blame Einstein's assumed postulate of constant light speed for limiting material objects to traveling at sub-light speed, rather than blaming $E=mc^2$. And historically, Einstein didn't even derive the $E=mc^2$ result until his second addendum paper, which he published after he had already shown the other consequences of his postulates. [1] Actually, there's an interesting minor secret buried in Einstein's postulates: One is missing. To ensure proper scaling of the results, you must add the following third postulate: If two groups of particles diverge from each other at speed $s$ along axis $x$, the orthogonal plane defined by the remaining two orthogonal axes $y$ and $z$ must remain invariant in scale between the two groups of particles. Or a lot less formally: $y$ and $z$ don't change, even though $x$ Lorentz contracts. That point seems so obvious that it's usually either assumed or treated as an outcome of the other two postulates. However, you can't really derive it from the other two postulates since an infinite number of profiles that meet the first two postulates are possible if you allow variable scaling of the $yz$ plane. 
Lorentz had noticed this, but his thoughts about it were largely forgotten after Einstein's papers. In any case, when talking about Lorentz contraction of $x$ it makes sense to be explicit about the invariance (or lack thereof) of the remaining two spatial axes. -

That equation holds for an object at rest; when the object is moving, the energy goes as $E= \frac{m_{0}c^{2}}{(1- v^{2}/c^{2})^{1/2}}$, so if $v \ge c$ and the energy must be real, then the mass would have to be purely imaginary. :D -
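A small numerical illustration of the γ factor discussed in the first answer above (plain Python; the chosen speeds simply echo the 10% and 90% figures from the quoted passage):

```python
# Sketch: how the dilation factor gamma = 1/sqrt(1 - v^2/c^2) behaves as v -> c.
from math import sqrt

def gamma(beta):           # beta = v/c
    return 1.0 / sqrt(1.0 - beta ** 2)

for beta in (0.1, 0.5, 0.9, 0.99, 0.9999):
    print(f"v = {beta}c   gamma = {gamma(beta):.4f}")

# Roughly: gamma(0.1c) ~ 1.005 (the "0.5 per cent" in the quoted passage),
# gamma(0.9c) ~ 2.29 (the "twice its normal mass"), and gamma grows without
# bound as v -> c, so E = gamma*m*c^2 can never be pushed all the way to v = c.
```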
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9679238200187683, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/257044/solve-11-cdot-161-n-1-16n-n-1-10/257049
# Solve $11 \cdot 16^{1/(n-1)} = 16^{n/(n-1)} - 10$ This is probably an easy task for the users here, but I could not solve it. $$11 \cdot 16^{1/(n-1)} = 16^{n/(n-1)} - 10$$ Wolfram Alpha gives the result $n= 5$. What are the steps to solve this? - ## 2 Answers $11\times 16^{\frac{1}{n-1}}=16^{1+\frac{1}{n-1}}-10=16\times 16^{\frac{1}{n-1}}-10$, so $5\times 16^{\frac{1}{n-1}}=10$, so $16^{\frac{1}{n-1}}=2$, so $\frac{1}{n-1}=\frac{1}{4}$, so $n=5$. - Thank you for helping me. I will mark your answer as accepted as soon as possible. – Guy David Dec 12 '12 at 12:58 $$11 \cdot 16^{1/(n-1)} = 16^{n/(n-1)} - 10$$ $$11 \cdot 16^{1/(n-1)} = 16^{1+1/(n-1)} - 10$$ $$11 \cdot 16^{1/(n-1)} = 16\cdot16^{1/(n-1)} - 10$$ $$16\cdot 16^{1/(n-1)} - 11\cdot16^{1/(n-1)}=10$$ $$5\cdot 16^{1/(n-1)}=10$$ $$16^{1/(n-1)}=2=16^{1/4}$$ $$1/(n-1)=1/4$$ $$n=5$$ -
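A quick numerical check of the algebra, using the same substitution $u = 16^{1/(n-1)}$ as the answers above (plain Python):

```python
# Sketch: check n = 5 via the substitution u = 16^(1/(n-1)) from the answers.
n = 5
u = 16 ** (1 / (n - 1))           # 16^(1/4) = 2
lhs = 11 * u                      # 11 * 2 = 22
rhs = 16 ** (n / (n - 1)) - 10    # 16 * u - 10 = 32 - 10 = 22
print(u, lhs, rhs)                # 2.0 22.0 22.0 (up to floating point)
```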
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355586767196655, "perplexity_flag": "middle"}
http://ontopo.wordpress.com/category/geometry/
on topology, computer science, life, and other stuff # Category Archives: Geometry ## Geodesics Posted on July 20, 2009 ##### Straight Lines When we think of a straight line, we usually think of a line in the Euclidean sense; that is, $c(t)=p+tX$, where $p$ is a point contained in the line, $t$ is a real number, and $X$ is a vector that points parallel to the line. If we consider Euclidean space as a manifold, we would say that $X$ is in the tangent space $T_{c(t)}(\mathbb E^n)$, because $c'(t)=X$. One important observation to make is that all along $c(t)$, $X$ never changes; i.e., we never accelerate. That is, if we move along the curve, we never speed up or slow down, and we never turn. In the language of my post on covariant derivatives, this is easy to express: $\nabla_{c'} c' \equiv 0$ The geometric interpretation is simple here: in the direction of the velocity vector, the velocity vector doesn’t change. You can probably see the punchline coming by now. If we generalize to a curve $c(t)$ on a manifold $M$, $c(t)$ is a geodesic if $\nabla_{c'} c' \equiv 0$. Now, you may notice that we can trace out the same curve if we tweak the parameter $t$ so that we could accelerate on the curve (we wouldn’t turn, but we could speed up or slow down). That is, we could have an alternate parametrization. But in order to have a geodesic, we need $\nabla_{c'} c' \equiv 0$, so $\nabla_{c'}\left<c',c' \right> = 2\left<\nabla_{c'} c',c' \right> = 0$, and therefore $\|c'\|$ is a constant along the curve. This gives us a unique parametrization of the curve, up to a constant scaling factor on the parameter. In fact, if we consider such a scaling factor, we get that $\nabla_{c'(st)} c'(st) = \nabla_{s c'(t)} s c'(t) = s^2\nabla_{c'(t)} c'(t) = 0$, so a geodesic with a constant scaling factor on its parameter is still a geodesic (and obviously has the same image). This motivates the following definition: if $\|c'\| = 1$ then the geodesic is called a normal geodesic. ##### The Exponential Map Say that some curve $\gamma(t)$ is a geodesic. Then $\nabla_{\gamma'}\gamma' = 0$ is a second-order differential equation in $t$. If we assume that $\gamma(0) = p$, and $\gamma'(0) = v$, then we have the required conditions for existence and uniqueness of a solution to the differential equation. That is, given a point $p\in M$ and tangent vector $v\in T_p(M)$, there is a unique geodesic $\gamma_v$ that passes through $p$ with velocity $v$. The exponential map $\text{exp}_p:T_p(M)\to M$ is defined as $\text{exp}_p(v) = \gamma_v(1)$, assuming that 1 is in the domain of $\gamma_v$. The exponential map is fairly important when talking about Riemannian manifolds, and it turns out that it is smooth and a local diffeomorphism. The latter means that there is a neighborhood around $p$ where its unique inverse exists. This inverse is the logarithmic map, or $\text{log}_p:M\to T_p(M)$. The exponential map is so important, in fact, that it appears in many of the important theorems in Riemannian geometry, like the Hopf-Rinow Theorem and the Cartan-Hadamard Theorem. It’s also essential to understanding the effects of curvature on a Riemannian manifold. ##### Arc Length At this point we can ask about the relationship between arc length and geodesics. Assume that we have some smooth function $\alpha : [a,b]\times(-\epsilon,\epsilon)\to M$. 
We can compute the change in arc length $L[c_s]$ over the family of curves $c_s = \alpha | [a,b]\times\{s\}$: $\frac d{ds}L[c_s] = \frac d{ds}\int_a^b\left<c_s'(t),c_s'(t)\right>^{1/2}dt = \int_a^b\nabla_S\left<T,T\right>^{1/2}dt$ $= \frac 1 2\int_a^b\left<T,T\right>^{-1/2}\nabla_S\left<T,T\right>dt = \int_a^b\left<T,T\right>^{-1/2}\left<\nabla_S T,T\right>dt$ The variables $S,T$ that we substitute here are fields of tangent vectors corresponding to the differential of $\alpha$ with respect to the variables $s,t$. The rest is just calculus. Since $s,t$ are independent of each other, we know that their derivatives commute and so we can say that $[T,V] = 0$. This means that we can make the switch $\nabla_S T = \nabla_T S$: $\frac d{ds}L[c_s] = \int_a^b\left<T,T\right>^{-1/2}\left<\nabla_T S,T\right>dt$ $= \int_a^b\left<T,T\right>^{-1/2}\left(T\left<S,T\right>-\left<S,\nabla_T T\right>\right)dt$ If we consider the curve $c_0$, and consider that we can always reparametrize a curve without loss of generality so that $l = \left<T,T\right>^{1/2}$ is a constant, $\frac d{ds}L[c_s]\mid_{s = 0} = l^{-1} \left(\left<S,T\right>\mid_a^b-\int_a^b\left<S,\nabla_T T\right>dt\right)$ This is called the first variation formula. The function $\alpha$ is called a variation. If we assume that all the $c_s$ are curves that join two points in $M$, then we know that $S$ vanishes at the endpoints. If we further assume that $c_0$ is a geodesic, then the integral vanishes (because $\nabla_T T = 0$). What this means is that geodesics are critical points of the arc length function $L$ for curves that join two points. We can’t claim that a geodesic segment minimizes the distance between two points (though there is a unique minimizing geodesic segment; for that we need the second variation formula, which I won’t get into in this post). To see this, consider the case when $M$ is a sphere, with the usual angular metric. If we consider any two distinct points, there is a great circle path that joins them that is of length the angular distance between them, $\delta$. However, there is also a path of length $2\pi - \delta$ that goes around “the long way” that joins the points as well. This path happens to be the longest one that you can take, and it’s also a geodesic segment. Obviously this would be a maximum of the first variation formula. It’s easy to see that the first variation formula gives us a lot of power in talking about the geometry of a Riemannian manifold. The source that I use actually motivates the definition of a geodesic from an effort to minimize the first variation formula. I prefer to motivate it from the “straight line” perspective. ##### Sources Much of this material comes from Comparison Theorems in Riemannian Geometry by Jeff Cheeger and David G. Ebin. Posted in Geometry, Topology Tagged covariant derivative, differential geometry basics, geodesic, metric ## Riemannian Connections Posted on June 25, 2009 For the project that I’m working on, I needed to know the basics of riemannian connections. Connections confused the hell out of me until I took a few days to really absorb them. I’m writing down my interpretation here so that I can burn it into the neurons, and hopefully help someone else trying to understand the same topic. ##### Covariant Derivatives of Scalar Functions A connection is also called a covariant derivative. 
One of the principles of differential geometry is that everything should behave the same regardless of which coordinate system you work in, so we’d like a way to get the derivative of a quantity when along an arbitrary direction. When we consider a scalar function f, the covariant derivative is just the directional derivative. If $X = \sum_{k=1}^n b_j E_j$: $\nabla_X f = Xf = \sum_{i=1}^n a_i \frac{\partial f}{\partial x_i}$ I found it extremely useful to think of the covariant derivative as a linear operator: $\nabla_X f = \left(\sum_{i=1}^n a_i \frac\partial{\partial x_i}\right)f$ ##### Covariant Derivatives of Vector Fields If we want to apply $\nabla_X$ to a vector field $Y$, then we can apply the operator: $\nabla_X Y = \left(\sum_{i=1}^n a_i \frac\partial{\partial x_i}\right)Y = \sum_{i=1}^n a_i \frac{\partial Y}{\partial x_i}$ Immediately we can see an interpretation for $\nabla_X Y$: see how $Y$ changes with respect to each coordinate direction, and then sum the resulting vectors together, weighted by each component of $X$. It’s easy to see how this gives us a coordinate-free derivative of a vector field. What we have right now is called an affine connection. ##### Affine Connections Affine connections have two properties; linearity in $X$ and the product rule on $fY$. This is immediate from the operator representation: $\nabla_{fU+gV} Y = f\nabla_U Y + g\nabla_V Y$ $\nabla_X fY = (\nabla_X f)Y + f\nabla_X Y$ This means that we can expand the representation in $X$: $\nabla_X Y = \sum_{i=1}^n a_i\nabla_{E_i}Y$ It should be pretty obvious that $\nabla_{E_i}Y$ is the same as $\partial Y/\partial x_i$, in that they both represent how $Y$ changes in the unit direction of $x_i$. If you’ve been paying attention, you’ve probably been wondering about how we compute these constructs. It’s fairly straightforward to assume that in Cartesian coordinates, we just differentiate each component of $Y$. What about in other bases? Well, assuming that $Y = \sum_{j=1}^n b_j E_j$, we can just apply the product rule on the terms: $\nabla_X Y = \sum_{i=1}^n a_i\nabla_{E_i} \sum_{j=1}^n b_j E_j$ $= \sum_{i=1}^n a_i \left(\sum_{j=1}^n \left(\nabla_{E_i} b_j\right) E_j + \sum_{j=1}^n b_j \nabla_{E_i} E_j\right)$ $= \sum_{i,j} a_i \frac{\partial b_j}{\partial x_i} E_j + \sum_{i,j} a_i b_j \nabla_{E_i} E_j$ In Cartesian coordinates, the second term is going to vanish, because the coordinate directions don’t change with respect to any direction. So our assumption about Cartesian coordinates is correct. In other bases, we can just think of the second term as a corrective factor for the curvature of the coordinate frames. In most texts, the vector $\nabla_{E_i} E_j = \sum_{k=1}^n \Gamma_{ij}^k E_k$ is defined, where the $\Gamma_{ij}^k$ are called Christoffel symbols. I won’t get into them here, except to say that they have some important symmetries. ##### Riemannian Connections If you’re familiar with this material, you may have noticed that I’ve hand-waved a lot. There’s a lot of machinery that needs to be set up to prove existence and uniqueness of all these constructs. It’s also machinery that works fairly well in Euclidean space, but we can’t make the same assumptions on general smooth manifolds. We’d like a connection that works on general manifolds, but we need to make some extra assumptions. 
A Riemannian connection is an affine connection with some extra properties: $\nabla_X Y - \nabla_Y X = \left[X,Y\right]$ $\nabla_X\left<U,V\right> = \left<\nabla_X U,V\right> + \left<U,\nabla_X V\right>$ Where $\left<\cdot,\cdot\right>$ is an inner product on the tangent space, and $\left[\cdot,\cdot\right]$ is the Lie bracket. The first condition imposes a restriction on the coordinate frames that states that the frames must be torsion-free; that is, the coordinate frames may not twist when moving in any particular direction. The second just imposes the product rule on the inner product. Euclidean space already has these properties, so the covariant derivative as I described it above is a Riemannian connection. These extra rules basically allow us to assume that a connection $\nabla$ is unique on any particular smooth manifold that has an inner product defined on its tangent space, and that we can use the above formula to write it out explicitly. There’s a lot more to it, of course, but we have enough to work with. I’ll be writing more posts that cover this topic, but I encourage you to read up on it yourself and derive your own intuition of what’s going on. → 2 Comments Posted in Geometry, Topology Tagged connection, covariant derivative, differential geometry basics, Riemannian ## Convexity Using Metric Balls Posted on March 23, 2009 I figure that I owe my readers a technical post, so while I’m riding home on the bus, I’ll write it up. This occurred to me when I was trying to figure out what to do on the ride. I have a nice gadget with a WordPress app, so why not? The project that I’m working on now involves defining a notion of convexity for a non-Euclidean space. There are any number of difficulties that you can run into when you attempt to define convexity on an arbitrary space, but I do have a few guarantees: • I’m on a manifold, so shapes make “sense,” albeit in a squishy way • I don’t have any limitations of convexity; that is, I can make a convex set as large as I like • Metric balls are convex So now I want to define a convex hull of a set of points in this space. I can do this in one of two ways. I can say that the convex hull is the convex set of minimal volume containing the set, or equivalently, that it is the intersection of all convex sets containing the set. I’d like to say that the intersection of all metric balls containing the set is the same as the convex hull (not just any convex superset of the set, mind you; specifically metric balls that are supersets of the set in question). I don’t necessarily need this lemma to be true, but it would be nice. The way to show that two sets are equivalent is usually to say that one contains the other, and vice-versa. It’s quite trivial to show that (in this space) the intersection of all metric balls that contain the set also contains the convex hull. Metric balls are, after all, convex. It’s trickier (to me) to show that the converse would also be true; that is, that the convex hull of a set also contains the intersection of all metric balls that contain the set. Any ideas? [Update: apparently there's a construction called a ball hull that is exactly the intersection of all metric balls containing a set. Perhaps it is essentially different from a convex hull.] → 5 Comments Posted in Geometry, Topology Tagged convex, hull, metric ## Spring Break Posted on March 16, 2009 Spring break is here, so it will be a good time to take a look at a couple things in addition to getting some work done. 
Here's a couple things I'm looking at right now:

As for work, among many other things, I'm trying to find a good "how-to" on deriving a curvature tensor. It seems that differential geometers like to leave these things as an "exercise." Posted in Computing, Geometry, Theory, Topology

## Visualizing Hyperspheres

Posted on March 12, 2009

Since all you in the blogosphere seem to love hyperspheres so much, here's a link to someone who put together some visualizations of hyperspheres and polytopes in 4 dimensions: http://groups.csail.mit.edu/mac/users/rfrankel/fourd/FourDArt.html The approach is pretty cool, and some of the images are quite stunning. → 1 Comment Posted in Geometry Tagged art, high dimensions, hypersphere, visualization

## Reasoning in Higher Dimensions: Measure

Posted on March 10, 2009

In a previous post on this topic, I said that hyperspheres get a bad rap. They're doing their best to be perfectly round, and someone comes along and accuses them of being inadequate, or weird. It turns out that hyperspheres aren't really weird at all. It's measure that's weird. And where measure is concerned, there are objects out there that truly display that weirdness. To recap, I was talking about how the volume of a unit hypersphere measured the normal way (with its radius = 1) approaches zero with increasing dimension. I also mentioned that even though a "unit" hypercube that circumscribes the unit sphere (i.e., a hypercube with inradius = 1) has volume that increases exponentially with the dimension ($2^d$), a hypercube with circumradius = 1 decreases even faster than the volume of the hypersphere. Why is one configuration different than the other? The answer is that they're not different. A cube is a cube, no matter how you orient it. If its side is of length s, then its volume is $s^d$. What's different here is our notion of unit measure. We commonly define a unit of volume as the volume of a hypercube with sides of unit length. In that light, it's not terribly surprising what we know about the volume of hypercubes. So why can't we just define the unit hypersphere to have unit volume? This seems objectionable until you realize that we do this all the time in the real world. What's a gallon? It has nothing to do with an inch or foot. So why do we worry ourselves over defining volume in terms of one-dimensional units? The metric system doesn't even adhere to this standard. A liter is a cubic decimeter. Why? It just worked out that way. Since these units are all just arbitrary, we could just declare that unit volume is the volume of a unit hypersphere. Or not. So a hypersphere's volume really isn't that weird. What seems weird is the discrepancy between the geometries of the hypercube and hypersphere. Are there objects that do act strangely in higher dimensions? Definitely. Consider a multivariate normal distribution (a Gaussian distribution in multiple dimensions). For the sake of simplicity, I'll consider one with zero mean and variance $\sigma^2$: $p(x) = \frac 1 {(2\pi)^{n/2}\sigma^n} \exp\left(-\frac{\|x\|^2}{2\sigma^2}\right)$ Multivariate Gaussians are all nice and round. What can we say about what the distance from the mean (0) looks like? Well, this is just the variance: $\mathbf{E}\|X\|^2 = \mathbf{E}(X_1^2 + \dots + X_n^2) = n\sigma^2$ How much does it deviate from this value?
We can apply a Chernoff bound (don’t ask me how; deriving Chernoff bounds is not my strong suit): $\mathbf{P}(\left|\|X\|^2 - n\sigma^2\right| > \epsilon n\sigma^2) \leq \exp\left(-\frac{n\epsilon^2}{24}\right)$ Let’s take another look at this bound, though. It’s saying that the probability of the squared distance from the mean deviating from nσ2 by more than a small percentage decreases exponentially with n. So the points that follow the distribution mostly sit in a thin shell around the mean. But the density function still says that the density is highest at the mean. Now that’s weird. Why does this happen, though? It’s difficult to get a handle on, but the word “density” is what you have to pay attention to. That shell has an incredibly high volume at higher dimensions (it grows with drd-1). High enough that the density is still lower at the shell than at the mean. Why it’s highest in the shell is even more difficult to figure out. I don’t have a good answer, but I suspect that it has something to do with the fact that the distribution must add up to one, and it has to “fill” all the nooks, and it can’t do that at the mean. It has to do this out in this thin shell. [Update: I guessed last night that if I multiply the p.d.f. by the boundary volume (i.e. "surface area") of a hypersphere of radius ||x||, then I should see spikes out at σ√n. I was correct. Below, Micheal Lugo confirmed that intuition slightly more rigorously in the comments. He's a probabilist, so I think I'm safe. :-) ] There are certainly weird things that happen in higher dimensions. In my opinion, all these things have more to do with measure than geometry. → 22 Comments Posted in Geometry Tagged distribution, gaussian, hypersphere, measure, Probability ## Reasoning in Higher Dimensions: Hyperspheres Posted on March 3, 2009 In my last post on higher dimensions, I alluded to the fact that I don’t agree completely with certain notions about higher dimensions. Specifically, I disagree with the idea that the intuition that you take for granted in low dimensions is necessarily ill-equipped to serve you in higher dimensions. Low-dimensional intuition is ill-equipped for many problems, and like most other topics in math, it’s usually most sensible to do the calculations anyway. Hyperspheres often get brought up with the subject of weirdness in higher dimensions, mostly because they’re easy to understand, and it’s easy to demonstrate the weirdness very quickly. But are they completely weird? Are the examples really fair, or are hyperspheres getting a bad rap? First, let’s get some notation out of the way. We often like to call a hypersphere an n-sphere, because it’s an n-dimensional manifold. Technically, one of these can exist in any metric space with more than n dimensions (because I’m talking about intuition, I’m assuming it’s Euclidean space). For simplicity, though, we’ll say that it lives in n+1 space, so that we can define it easily: $S^n = \left\{ x \in \mathbb{R}^{n+1} : \|x\| = r\right\}$ That’s not all that I want to talk about, though. I also want to talk about the volume of the n-sphere, and in that case, we often talk about a ball, which is just the interior of a sphere. 
The interior of an n-sphere is an (n+1)-ball, because if the sphere is an n-dimensional manifold, its interior is an (n+1)-dimensional manifold: $B^{n+1} = \left\{ x \in \mathbb{R}^{n+1} : \|x\| < r\right\}$ Or more simply: $B^{n} = \left\{ x \in \mathbb{R}^{n} : \|x\| < r\right\}$ The volume of this object has a somewhat simple formula: $V_n={\pi^\frac{n}{2}r^n\over\Gamma(\frac{n}{2} + 1)}$ Where Γ(x) represents the Gamma function (which is a tad more complicated). So where’s the counter-intuition? Say that we took the unit ball for all n > 0 and graphed its volume: Volume vs. dimension of unit n-ball This does seem a little odd. The volume goes up, hits a peak at 5, and then drops, and eventually bottoms out. In fact, with high enough dimension, you won’t see an n-ball have any volume at all. The limit of the volume of any n-ball as n goes to infinity is 0. That is weird. That’s not necessarily something that you’d expect. It also seems weird that the volume starts dropping after a while. But is all this really that strange? What if we fixed the radius at, say, 1/sqrt(π)? The volume vs. dimension is then just a decreasing function, even at low dimensions. Not surprising when you consider that radii less than 1 should make the volume diminish rapidly. So what about radii greater than 1? What if we fix r at say, 3? The volume peaks out at n = 56, and the volume is about 143 billion … somethings. After that, the volume diminishes back to zero again. All that we’re really saying here is that the geometry of the sphere dominates rn, but rn has enough power to dominate at low dimensions until the geometry cuts over. What’s so special about rn though? Why is this the gold standard by which we judge the hypersphere? It’s just the hypercube with sides of length r. In fact, the unit sphere is inscribed in a cube with sides of length 2r. What if we considered a hypercube of circumradius r instead of inradius r? That means that a sphere of radius r contains it. If that’s the case, then it has volume strictly less than the sphere’s volume. In fact, its volume is: $V_n=\left(2r\over\sqrt n\right)^n = {(2r)^n\over n^{n\over2}}$ which diminishes even faster than the sphere’s volume. So it can’t be the geometry of a cube that makes it keep its volumetric power. So what’s my point? This is all sounding very counterintuitive. My point is that when you talk about counter-intuition in higher dimensions, it’s helpful to talk about what’s actually going on, instead of maligning poor innocent constructs like the hypersphere. What’s actually going on? More about that later. But for now, consider this: no matter how many dimensions a sphere has, it’s always perfectly round, and perfectly isotropic. That’s intuition that isn’t lost in higher dimensions. [Someone posted this to Reddit! Thanks!] → 21 Comments Posted in Geometry Tagged high dimensions, hypersphere ## Reasoning in Higher Dimensions Posted on March 2, 2009 In celebration of WoBloMo, I give you my first attempt at an every-other-day post for the month of March. A recent conversation in a seminar that I attend reminded me of a post on Inductio Ex Machina that I read a bit ago. The take-home message of the post was “What seems obvious in two and three dimensional space does not hold in 10 dimensions or higher.” I can agree with that statement to a certain degree, but I have some reservations about saying that it’s true generally. I was going to write a couple quick blurbs about how I do/don’t agree, but these keep branching into larger ideas. 
In the interest of having material to blog about this month, I’m going to develop these ideas later. To be continued. → 2 Comments Posted in Geometry Tagged high dimensions
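As a numerical companion to the hypersphere and measure posts above, here is a small sketch (plain Python; `math.gamma` is the Γ function) that reproduces the volume claims and the thin-shell behaviour of the Gaussian; the sample sizes are arbitrary illustrative choices.

```python
# Sketch: volume of an n-ball of radius r, V_n = pi^(n/2) r^n / Gamma(n/2 + 1),
# plus the Gaussian thin-shell effect from the "Measure" post.
from math import pi, gamma, sqrt
import random

def ball_volume(n, r=1.0):
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

# Unit ball: the volume peaks at n = 5 and then heads towards zero.
vols = [(n, ball_volume(n)) for n in range(1, 21)]
print(max(vols, key=lambda v: v[1]))          # (5, 5.2637...)

# Radius 3: the peak moves out to n = 56 (about 1.4e11) before decaying.
vols3 = [(n, ball_volume(n, 3.0)) for n in range(1, 200)]
print(max(vols3, key=lambda v: v[1]))         # (56, ~1.43e11)

# Thin shell: for a standard Gaussian in n dimensions, ||X|| concentrates
# near sqrt(n) even though the density itself is largest at the origin.
n, trials = 1000, 200
norms = [sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(n))) for _ in range(trials)]
print(min(norms), max(norms))                 # both close to sqrt(1000) ~ 31.6
```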
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 100, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387121200561523, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/189769-how-many-ways-can-we-assign-professors-print.html
# how many ways can we assign the professors Printable View • October 7th 2011, 02:37 PM wopashui how many ways can we assign the professors A university needs to assign professors to 6 math courses. there are 3 professors available and all of the courses are taught at different times. In how many ways can the professors be assigned to each professor. A) there are 6 different courses and there are no restrictions on how many courses can be assigned to each professor should this just be 6! B) there are 6 different courses and each professor must be assigned exactly 2 classes. consider the 2 courses as a repeat, we have 6!/(2!2!) need someone to verify this for me • October 7th 2011, 03:01 PM Plato Re: how many ways can we assign the professors Quote: Originally Posted by wopashui A university needs to assign professors to 6 math courses. there are 3 professors available and all of the courses are taught at different times. In how many ways can the professors be assigned to each professor. A) there are 6 different courses and there are no restrictions on how many courses can be assigned to each professor B) there are 6 different courses and each professor must be assigned exactly 2 classes. This question is really vague! a) If we take the phrase "no restrictions" quite literally the answer is $3^6$. That is the number of functions from a set of six to a set of three. But of course that means one professor might be assigned all six. One the other hand, if each prof. must have at least one course then the answer is 540. That is the number of surjections from a set of six to a set of three. B) This a much easier question. The answer is $\frac{6!}{2^3}$. That is the number of ways to arrange the string $AABBCC$. For example the arrangement, $ABCCAB$ means Prof A teaches the first and fifth courses; Prof B teaches the second and sixth course; etc. • October 8th 2011, 08:25 AM Soroban Re: how many ways can we assign the professors Hello, wopashui! Quote: A university needs to assign 3 professors to 6 different math courses. All of the courses are taught at different times. In how many ways can the professors be assigned to each professor: A) if there are no restrictions on how many courses . . can be assigned to each professor. I see no vagueness. It states clearly that all six courses could be assigned to one professor. Each course has a choice of 3 professors. The number of ways is: . $3^6 \,=\,729$ Quote: B) if each professor must be assigned exactly 2 classes. This is an "ordered partition" problem. The number of ways is: . ${6\choose2,2,2} \:=\:\frac{6!}{2!\,2!\,2!} \:=\:90$ • October 8th 2011, 08:25 AM wopashui Re: how many ways can we assign the professors should the first part be 6^3 instead, sine we are assigning profs to classes, not classes to profs • October 8th 2011, 01:13 PM Plato Re: how many ways can we assign the professors Quote: Originally Posted by wopashui should the first part be 6^3 instead, sine we are assigning profs to classes, not classes to profs How did I know that you were vague on this question? $6^3$ is the number of ways to assign each element of a set of three to one element in a set of six. Doing it that way, each professor is assigned exactly one course. That means that three courses are left uncovered. Do you think that is what the question means? • October 8th 2011, 09:18 PM wopashui Re: how many ways can we assign the professors then we can assign the remaining 3 courses again to the three profs, will this work? 
• October 9th 2011, 03:54 AM Plato Re: how many ways can we assign the professors

Quote: Originally Posted by wopashui then we can assign the remaining 3 courses again to the three profs, will this work?

Actually, I was trying to show you that your reading does not work. I should have been clearer. Thinking of this as assigning the professors to the classes in $6^3$ ways means two professors can be assigned to the same class. That is clearly not what the question is about. Without any restrictions the answer to part A) is $3^6$. That is the only way that all six courses have an instructor of record.

• October 9th 2011, 02:32 PM wopashui Re: how many ways can we assign the professors

Pizza Pizza has 12 different deliveries to make and 3 drivers. In how many ways can the deliveries be made by the 3 drivers so that one driver makes 3 deliveries, another driver makes 5 deliveries, and the third driver makes the remaining deliveries? this is an example given in class, we have this as 12!/(3!5!4!), so i was thinking we can have 6! for the profs question

• October 9th 2011, 03:03 PM Plato Re: how many ways can we assign the professors

Quote: Originally Posted by wopashui Pizza Pizza has 12 different deliveries to make and 3 drivers. In how many ways can the deliveries be made by the 3 drivers so that one driver makes 3 deliveries, another driver makes 5 deliveries, and the third driver makes the remaining deliveries? this is an example given in class, we have this as 12!/(3!5!4!), so i was thinking we can have 6! for the profs question

Well yes if each professor teaches two of those courses: $\frac{6!}{2^3}$
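The counts discussed in this thread are small enough to check by brute force; a sketch in Python that enumerates all assignments of the six courses to professors A, B, C:

```python
# Sketch: enumerate every assignment of 6 distinct courses to professors A, B, C
# and count the cases discussed in the thread.
from itertools import product

assignments = list(product("ABC", repeat=6))   # one professor per course

print(len(assignments))                                          # 3^6 = 729
print(sum(1 for a in assignments if len(set(a)) == 3))           # each prof gets >= 1 course: 540
print(sum(1 for a in assignments if all(a.count(p) == 2 for p in "ABC")))   # exactly two each: 90
```

The 540 matches the inclusion-exclusion count of surjections, 3^6 - 3·2^6 + 3, and the 90 matches 6!/(2!2!2!).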
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493660926818848, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/13184/can-an-object-accelerate-to-infinite-speed-in-finite-time-newtonian
# Can an object accelerate to infinite speed in FINITE time (Newtonian)?

Obviously this is impossible in relativity; however, if we ignore relativity and use only Newtonian mechanics, is this possible? How (or why not)? -
3 The question as posed is trivial. If the force as a function of time F(t) integrates to infinity, then of course the answer is yes. – Ben Crowell Aug 5 '11 at 2:45
@Ben Crowell, what is infinite speed? It's nonsense... – Andyk Aug 5 '11 at 6:20
Vector quantities like velocity etc. cannot be infinite. This is meaningless. You should reformulate your question (the title). – Andyk Aug 5 '11 at 16:56
@ANKU: It's not meaningless - please see BebopButUnsteady's answer – BlueRaja - Danny Pflughoeft Aug 5 '11 at 17:23
@ANKU One comment about one topic is enough. – mbq♦ Aug 5 '11 at 20:20

## 4 Answers

The answer is yes in some uninteresting senses: Take two gravitationally attracting point particles and set them at rest. They will attract each other and their velocity will go to $\infty$ in finite time. Note this doesn't contradict conservation of energy since the gravitational potential energy is proportional to $-1/r$. This isn't so interesting since it's just telling you that things under gravity collide. But it's technically important in dealing with the problem of gravitationally attracting bodies. Now a more interesting question: Is there a situation where the speed of a particle goes to infinity without it just being a collision of two bodies? Surprisingly, the answer to this question is yes, even in a very natural setting. The great example is given by Xia (Z. Xia, "The Existence of Noncollision Singularities in Newtonian Systems," Annals Math. 135, 411-468, 1992). His example is five bodies interacting gravitationally. With the right initial conditions one of the bodies can be made to oscillate faster, with the frequency and amplitude going to infinity in a finite amount of time. Added Here is an image. The four masses $M$ are paired into two binary systems which rotate in opposite directions. The little mass $m$ oscillates up and down faster. Its behavior becomes singular in finite time. -
Amazing! Just the answer I was looking for. Here is a link to the paper mentioned, and here is an article trying to explain it in laymen's terms. Could you perhaps add some more information about the "five bodies interacting?" – BlueRaja - Danny Pflughoeft Aug 4 '11 at 18:26
We can tell the same for attracting point charges. As the distance r decreases the force between them increases. Taking the limit $r\to 0$ you get $F\to \infty$, therefore $u\to \infty$. But this means that the speed approaches infinity, not that the velocity is infinite. Take a look at the definition of the limit. – Andyk Aug 4 '11 at 19:07
I'm having trouble understanding your diagram. I'm fine with the fact that $m$ is a comparatively small mass, but where does it go when it oscillates to infinite speeds? Does it get out of the gravity well eventually? And how do the 2 binary systems maintain the distance from each other? Is it supposed to be implied that they're rotating, or that they're oscillating as well? – AlanSE Aug 4 '11 at 20:32
@Zassounotsukushi: Imagine the left-half of the position-vs-time graph `1/t`, with `t < 0`. No matter what speed you give me, I can find a time `t < 0` where the object is moving at an even greater speed; thus at the limit, the object's speed is infinite.
Xia found a case where, under Newton's laws, this happens (though the graph behaves a bit more like `(1/t) sin(1/t)`). At `t=0`, there's what we call a singularity; I'm not really sure how to answer the question of where the object is, or what it looks like, at or after `t=0`, partly because singularities don't make physical sense. – BlueRaja - Danny Pflughoeft Aug 17 '11 at 16:52

It sounds like the answer is yes for particles with mass but zero physical extent. I assume if the particles have finite extent (or we bring in quantum mechanics), then the answer is no, as it would be if we brought in relativity. -

If by "ignoring relativity", you mean ignoring the fact that nothing can move faster than the speed of light, then the answer is still no. Since kinetic energy is proportional to the square of the speed, infinite speed would mean infinite energy, which you cannot provide, whatever the amount of time you are considering. -
I'm sure the quest to accelerate to infinity will be easier due to the numerous physically implausible inventions that are possible in this world without relativity :-P – AlanSE Aug 4 '11 at 18:52
2 Others have provided simple counterexamples to your statement. – Ben Crowell Aug 5 '11 at 2:36

Lab Reference Frame
Let's start by assuming a force is exerted on the object from equipment at rest in the "lab" reference frame, meaning that there is negligible recoil on the lab frame from accelerating the object. Not only is it the case that $F=ma$, but the power delivered to the object grows as $P=v F$. Take a device like the large hadron collider, and just completely wave away the technical difficulty of applying force to an object of progressively increasing velocity, and let's note the following.
$$v' = a = \frac{F}{m} = \frac{P}{v m}$$
The requirement is that $v \rightarrow \infty$ in a finite time. You know, just for fun let's use an actual functional form.
$$v(t) = -\frac{1}{t}$$
For t from $-\infty$ to $t=0$.
$$P = m v' v = -\frac{m}{t^3}$$
So the power delivered must increase at the hilariously fast rate of $1/t^3$ as $t$ goes to zero, not that it matters because we all knew this would result in requiring infinite energy, but this just shows exactly how intangible it is.
Rocket
In the case of a rocket the propellant has to be hauled along with the payload, but the tradeoff is that you then don't have the $P=v F$ proportionality, since the propellant itself is still moving after being ejected. The equation of motion for a rocket that starts with mass $M$ and ends with mass $m$, ejecting propellant with speed $v_e$, is as follows. I'll add in the approximation for $M \gg m$, because obviously that must be the case since we're talking about going to infinity.
$$v = v_e \ln{ \frac{M}{m} }$$
This is certainly interesting. It is interesting to note that the speed the rocket can reach is proportional to the speed the propellant is ejected at, which is also not limited by the speed of light. The energy required to expel that propellant also isn't a problem (haha) because it is not the case that $E=mc^2$ in this world. But it would be skirting the problem to say that the propellant is ejected at infinite speed, so we still seek a way for the above expression to limit to infinity with $v_e \neq \infty$. But we would also really like $\frac{M}{m} \neq \infty$, because that would require either an infinitely large starting mass or an infinitely small ending mass.
Neither of these options is appealing, unless atoms also don't exist in this world, allowing us to use fractal math to claim that an infinitely small chunk of something went to infinite speed. To the extent that we don't make these absurd assumptions, your request is impossible, even given the already absurd assumption of allowable superluminal speed. -
1 "Let's start assuming a large reference frame acts on the object." What does this mean? A frame of reference is not an object that exerts forces on other objects. – Ben Crowell Aug 5 '11 at 2:37
@Ben Edited the answer for clarity. – AlanSE Aug 5 '11 at 2:48
Will somebody PLEASE explain the downvotes, because as far as I can tell this is a useful and thoughtful answer. If there is something that you know that I don't, of course I want to hear it. – AlanSE Aug 5 '11 at 12:49
1 Downvoted because it's too limited in imagination --- as the top comment makes clear. There seems to be a great deal of "classical" work that people are in the dark about. You've done a great job of explaining a particular situation, but clearly failed to account for the full general case. – genneth Aug 6 '11 at 16:01
@genneth That's fair. I would just add that the case of two passing charges fully satisfies the requirement of delivering the rate of power specified, that being the $1/t^3$ form I gave. It's interesting to put things in these terms, because it shows that no answer can really be "right". Since it's a made-up world, quantum mechanics could prohibit the flyby that leads to infinite velocity... or it could not. – AlanSE Aug 6 '11 at 22:45
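To make the two-point-particle example from the accepted answer concrete, here is a small numerical sketch (written for this page, in arbitrary units with $G = M = r_0 = 1$, treating one body as a fixed attractor for simplicity): for a test mass released from rest and falling radially inward, energy conservation gives a speed that grows without bound as the separation shrinks, while the total fall time stays finite.

```python
# Radial infall toward a point mass M under Newtonian gravity, released from rest at r0.
# Energy conservation: v(r) = sqrt(2*G*M*(1/r - 1/r0))  ->  diverges as r -> 0,
# yet the total fall time t = integral_0^{r0} dr / v(r) is finite.
import numpy as np

G = M = r0 = 1.0                       # illustrative units, not physical values

def v(r):
    return np.sqrt(2.0 * G * M * (1.0 / r - 1.0 / r0))

for r in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(f"r = {r:.0e}  ->  v = {v(r):.3e}")

# Fall time via the substitution r = r0*sin(theta)**2, which removes the endpoint
# singularities:  t = sqrt(r0**3/(2*G*M)) * integral_0^{pi/2} 2*sin(theta)**2 dtheta.
n = 200_000
theta = (np.arange(n) + 0.5) * (np.pi / 2) / n            # midpoint rule
t_fall = np.sqrt(r0**3 / (2 * G * M)) * np.sum(2 * np.sin(theta) ** 2) * (np.pi / 2) / n
print("numerical fall time              :", t_fall)
print("analytic (pi/2)*sqrt(r0^3/(2GM)) :", (np.pi / 2) * np.sqrt(r0**3 / (2 * G * M)))
```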
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9476634860038757, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/10262/home-made-lattice-calculation
Home-made lattice calculation?

The topic of Lattice QCD or Lattice gauge theory or even Lattice field theory is quite old now. And the main reason for the interest in the topic is the ability to calculate nonperturbative stuff on a computer. It seems that to do research with lattice you need access to a supercomputer. But now everyone can afford something that is as powerful as supercomputers 20 years ago. Maybe it is possible to redo some of the results of that time? Is there any project that is: 1. Relatively simple. 2. Allows one to calculate some real-world quantity (like the mass of the proton). 3. Can be done with an average home computer. -
Do you want to program something yourself, and if so, what is your skill level? – Gerben May 23 '11 at 11:52
@Gerben , yes I want to program myself. Preferably from scratch. Assume that I have all the experience needed in numerics, computing, CS, low-level or parallel programming etc. – Kostya May 23 '11 at 12:18

4 Answers

While not strictly lattice QCD, Michael Creutz' 30 year old lattice gauge papers have very simple C implementations (!). For example, look at this paper, which gives a very readable explanation of lattice gauge simulations, with source code: http://latticeguy.net/mypubs/pub165.pdf The source code is also available here: http://thy.phy.bnl.gov/~creutz/z2/ This compiles and runs out of the box and reproduces the results in the paper. Current papers like the Portuguese GPU group's mentioned by Lubos all build on Creutz's 30 year old stuff. -
Now that is the answer I was waiting for. Thanks a lot for the links. – Kostya Feb 15 '12 at 17:40

You don't need a really big computer. Peter LePage used to do talks where he'd ask the audience for a "random" number at the beginning of the talk (but not 7, 17, 42, or 69 'cause he'd already done those) and start a simulation on one screen with that number as a seed. Then he'd give a talk on how to speed up LQCD calculations on the other screen while his PII 200 MHz laptop crunched a $b\bar{b}$ state during the talk. First, heavy flavor is easier to do than anything involving light quarks. Especially $u$ or $d$ quarks. You certainly want to learn this business with a faster problem than the proton mass. Second, "relatively simple" in LQCD is still pretty complicated. Peter's 45 minute talk was intended to give people who knew roughly what a naive LQCD calculation looked like an idea of how his very clever trick for optimizing the business worked. I also sat through four hours of the same thing at a graduate summer school where he was teaching the technique and I only sort of got it (because it was targeted at students who were doing LQCD). This is a big project. -
In the same vein, I'd suggest something more modest than true QCD, like finding the phase diagram of the Higgs sector. Even setting that up properly will take quite some time, and I know for a fact that (when done efficiently) it's not a very expensive calculation. For ideas on how it's done, check out Montvay/Münster (Quantum fields on a lattice), or one of the countless other canonical references of the field. – Gerben May 23 '11 at 18:23

You may try to download QCD software for CUDA - using GPU etc. - at http://nemea.ist.utl.pt/~ptqcd/ See also But I don't have any experience and I don't expect the work to be user-friendly smooth sailing.
;-) -
I don't have any experience with CUDA itself (apart from having read some articles about it), but I wouldn't recommend starting with it; it's a lot more technical, meaning that you need a lot more computer and programming knowledge to do something for yourself, for relatively little benefit to the home user. – Gerben May 23 '11 at 11:49
Yes you should try to find an unoptimized good ol' C-code with suitable documentation.. then post the links here :) – Bjorn Wesen May 23 '11 at 12:14
Actually I'm perfectly fine with CUDA... – Kostya May 23 '11 at 12:19
Usually, CUDA optimization is heavy work by itself, and the CUDA specifications change continually with new GPUs arriving 2 times a year. So one usually has to make the decision on which to concentrate on - the physics or the computer optimization. Cardoso's papers linked above concentrate mainly on the implementation details with regards to CUDAfication, and hardly describe anything about the results they extract or the background physics. If you are already an expert at lattice QCD, then it will be fun of course to implement on the GPU:) – Bjorn Wesen May 23 '11 at 12:35

If you don't have much experience with Monte Carlo simulations, you might want to start with simulations of the 2d Ising model, which is a lot less complex than SU(N) Yang-Mills or QCD. Monte Carlo simulation is mostly an art, since we don't usually have rigorous error bounds. You'll learn the art faster if you can reduce the time your simulations take, and 2d Ising model sims are extremely fast. You can do them in Python on a modern laptop in a matter of minutes. Try tuning the hopping parameter until you see the phase change. See if you can extract critical exponents and check them against the exact solutions. -
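In the spirit of the last answer, here is a minimal 2D Ising Metropolis sketch (written for this page; the lattice size, sweep counts and temperature list are arbitrary choices, with $J = k_B = 1$). Scanning the temperature across $T_c \approx 2.269$ shows the average magnetization dropping at the transition.

```python
# Toy 2D Ising model with single-site Metropolis updates (not lattice QCD) -- a warm-up
# simulation of the kind suggested above. Exact critical temperature is Tc ~ 2.269.
import numpy as np

rng = np.random.default_rng(0)
L = 24                                   # L x L lattice with periodic boundaries
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, T):
    """One Monte Carlo sweep: L*L single-spin Metropolis trial flips."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb      # energy change if this spin is flipped (J = 1)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for T in [1.5, 2.0, 2.27, 2.6, 3.2]:
    for _ in range(400):                 # equilibration sweeps
        sweep(spins, T)
    m = 0.0
    for _ in range(200):                 # measurement sweeps
        sweep(spins, T)
        m += abs(spins.mean())
    print(f"T = {T:4.2f}   <|m|> ~ {m / 200:.3f}")
```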
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9459109306335449, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/184135/trigonometry-why-we-need-to-relate-to-circles
# Trigonometry- why we need to relate to circles I'm a trigonometry teaching assistant this semester and have a perhaps basic question about the motivation for using the circle in the study of trigonometry. I certainly understand Pythagorean Theorem and all that (I would hope so if I'm a teaching assistant!) but am looking for more clarification on why we need the circle, not so much that we can use it. I'll be more specific- to create an even angle incrementation, it seems unfeasible, for example, to use the line $y = -x+1$, divide it into even increments, and then draw segments to the origin to this lattice of points, because otherwise $\tan(\pi/3)$ would equal $(2/3)/(1/3)=2$. But why mathematically can't we do this? - There's no reason we can't do this-it just wouldn't define a useful function. – Kevin Carlson Aug 19 '12 at 0:11 But if we could, the tangent function wouldn't be well-defined correct, given the above example? – Erik Aug 19 '12 at 0:14 I guess what I'm getting at is why can't we do this and be able to yield the same outputs as we would when defining the trig functions in terms of the circle. – Erik Aug 19 '12 at 0:15 1 I'm not exactly sure what you're asking. You can define angles that way, but then you lose rotational invariance. – copper.hat Aug 19 '12 at 0:33 @copper.hat that was what I was asking. And I was asking for more mathematical insight into that fact. – Erik Aug 19 '12 at 2:33 ## 4 Answers Given your comments, one of the biggest problems with the construction you've offered is that it can't define the trig functions for angles of all real values, or even for every angle between 0 and $2\pi$, since rays from the origin hit your line at angles between $-\frac{\pi}{4}$ and $\frac{3\pi}{4}$. So we'd want to at least propose some other closed curve, say a smooth one since the trig functions are smooth, as the place where we define our functions. Perhaps the simplest reason that the circle is a natural place to define trigonometric functions is that angle is a measure of arc of a circle. - I appreciate your feedback. But I would really like to know why the above construction doesn't yield the same tangent values as the definition in terms of the circle. – Erik Aug 19 '12 at 0:23 I don't follow your computation, after looking closer. Are you drawing the right triangle with hypotenuse from the origin to the line $-x+1$, legs along the $x$-axis and parallel to the $y$-axis, and hypotenuse at angle $\pi/3$ from the $x$-axis? The end of the hypotenuse for this is at $(\frac{1}{1+\sqrt{3}},\frac{\sqrt{3}}{1+\sqrt{3}}),$ which would give the correct tangent of $\sqrt{3}$. – Kevin Carlson Aug 19 '12 at 0:29 Kevin, your last comment got at my main question. Why must angle be defined to be the measure of arc of a circle? I'm quite confident you're correct but am unsure why this needs to be the case. – Erik Aug 19 '12 at 0:44 What I did was not actually use an angle pi/3 but consider the angle formed by the positive x-axis and the line connecting the origin to the point (1/3, 2/3), which is two-thirds of the way from (1,0) to (0,1). I guess my question in this context might be stated: why is such an angle formed not 2/3 of pi/2? – Erik Aug 19 '12 at 0:46 1 @Erik: Because the whole thing arose more than two millenia ago when people were doing planet and star observations. The only measurable thing was angular distances between heavenly bodies, as observed from Earth. 
– André Nicolas Aug 19 '12 at 6:01 show 3 more comments Norman Wildberger's book Rational Trigonometry shows that one can do an immense amount of trigonometry and applications to geometry without any parametrization of the circle by arc length. He treats triangles largely without mentioning circles. Notice that the squares of the sine and cosine are rational functions of the slopes of two lines meeting at an angle. One can deal with those rational functions without dealing with any parametrization of the circle by arc length. Wildberger doesn't deal with sines and cosines, but with their squares. In an $n$-dimensional space, the angle between two vectors depends on the equivalence classes to which they belong, where two vectors are equivalent if one is a scalar multiple of the other. If you call such an equivalence class a "slope" then you still get the squares of the sine and cosine as rational functions of the slope. Of course, one thereby gives up the ability to deal with Fourier sine- and cosine-series. So there's a trade-off: some efficiency is gained and the ability to do some things is lost. - I'll have to give your response some thought. I appreciate it. Maybe either you, Michael, or Kevin have answered my essential question, but I'm not sure. My main question is referenced in the example in the original question - if we draw a line from the origin to (1/3, 2/3), which is two-thirds of the way from (1,0) to (0,1), and reason that the angle formed is pi/3 since pi/2 times 2/3 is pi/3, we now find that the "tangent" value is 2, inconsistent with the actual value tan(pi/3)=√3. – Erik Aug 19 '12 at 0:37 1 My answer is of course in a sense a partial answer, dealing with a point of view that is not the usual one. – Michael Hardy Aug 19 '12 at 1:49 ....and now I've posted another partial answer, looking at the question from yet another point of view. – Michael Hardy Aug 19 '12 at 3:20 Suppose that instead of parametrizing the circle by arc length $\theta$, so that $(\cos\theta,\sin\theta)$ is a typical point on the circle, one parametrizes it thus: $$t\mapsto \left(\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\right)\text{ for }t\in\mathbb{R}\cup\{\infty\}. \tag{1}$$ The parameter space is the one-point compactification of the circle, i.e. there's just one $\infty$, which is at both ends of the line $\mathbb{R}$, rather than two, called $\pm\infty$. So $\mathbb{R}\cup\{\infty\}$ is itself topologically a circle, and $\infty$ is mapped to $(-1,0)$. Now do some geometry: let $t$ be the $y$-coordinate of $(0,t)$, and draw a straight line through $(-1,0)$ and $(0,t)$, and look at the point where that line intersects the circle. That point of intersection is just the point to which $t$ is mapped in $(1)$. Later edit: an error appears below. I just noticed I did something dumb: the mapping between the circle and the line $y=1-x$ that associates a point on that line with a point on that circle if the line through them goes through $(0,0)$ is not equivalent to the one in $(1)$ because the center of projection is the center of the circle rather than a point on the circle. end of later edit This mapping is in a sense equivalent to the one you propose: I think you can find an affine mapping from $t$ to your $x$ on the line $y=-x+1$, such that the point on the circle to which $t$ is mapped and the point on the circle to which $x$ is mapped are related by linear-fractional transformations of the $x$- and $y$-coordinates. 
The substitution $$\begin{align} (\cos\theta,\sin\theta) & = \left(\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\right) \\[10pt] d\theta & = \frac{2\,dt}{1+t^2} \end{align}$$ is the Weierstrass substitution, which transforms integrals of rational functions of sine and cosine, to integrals of simply rational functions. I'm pretty sure proposed mapping from the $(x,y=-x+1)$ to the circle would accomplish the same thing. - I appreciate the sophisticated response and the time you took to come up with it. I'll give it some thought. Unfortunately I can't give anybody an up-vote because I just joined the site. Otherwise, I'd give you a few for your two answers if I could. – Erik Aug 19 '12 at 3:30 @Erik : There is a mistake in the later part of this answer. See the "later edit" above. – Michael Hardy Aug 21 '12 at 22:01 The reason we use the unit circle instead of your idea is that $\cos$ and $\sin$ parametrize the unit circle very naturally in terms of our familiar coordinates. I mean, $x^2+y^2=1$ is parametrized as $x=\cos(\theta),$ $y= \sin(\theta)$. So,using the unit circle, you can just read off the values of sine and cosine from the $x,y$ coordinates of the points on the unit circle. This is a useful thing, both computationally and conceptually. While I suppose one could--in theory--parametrize the line $y=1-x$ using the Cosine and Sine, the parametrization will be nowhere near as neat or useful as it was above. In fact, it will be very ugly, so you can no longer read the values of sine and cosine off from the coordinates as we could in the prettier example of using the unit circle. - A polar parametrization of a line is useful for converting angular momentum to linear momentum (or vise versa). – Baby Dragon Aug 22 '12 at 1:04
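A quick numerical check of the construction in the question (a few lines of Python written for this page) shows concretely why equal steps along the chord $y = 1 - x$ do not give equal steps of angle, while equal arc lengths on the unit circle do.

```python
# The point (1/3, 2/3) sits two-thirds of the way from (1,0) to (0,1), but the ray to it
# does NOT make the angle (2/3)*(pi/2) = pi/3 with the x-axis.
import math

x, y = 1/3, 2/3
angle = math.atan2(y, x)
print("angle to (1/3, 2/3):", math.degrees(angle))    # ~63.43 degrees, not 60
print("tan of that angle  :", math.tan(angle))        # 2, as in the question
print("tan(pi/3)          :", math.tan(math.pi / 3))  # sqrt(3) ~ 1.732

# By contrast, equal arcs on the unit circle correspond to equal angles by definition:
for k in range(4):
    t = k * (math.pi / 2) / 3                         # thirds of a quarter turn
    print(f"t = {t:.4f}  point = ({math.cos(t):.4f}, {math.sin(t):.4f})")
```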
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505246877670288, "perplexity_flag": "head"}
http://conservapedia.com/Standard_deviation
# Standard deviation

### From Conservapedia

This article/section deals with mathematical concepts appropriate for a student in late high school or early university.

Standard deviation is a measure in statistics of the dispersion of a set of values (represented as X). It is defined as the square root of the variance of these values, where the variance is defined as $\sigma^2 = \operatorname{E}[(X-\operatorname{E}[X])^2] = \operatorname{E}[X^2] - (\operatorname{E}[X])^2$ where the expected value of X is E(X). Thus the standard deviation is $\sigma = \sqrt{\operatorname{E}[(X-\operatorname{E}[X])^2]} = \sqrt{\operatorname{E}[X^2] - (\operatorname{E}[X])^2}$ The formula for the standard deviation must not be confused with the formula $S = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2}$ (where $\bar X = \frac{1}{n}\sum_{i=1}^{n} X_i$ is the sample mean), which is the formula for a point estimate of the true standard deviation from a sample of size n. As such this statistical estimator itself has a variance which, as the formula indicates, decreases as the sample size increases.
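As a small illustration of the two formulas, Python's standard library exposes both: `statistics.pstdev` divides by $n$ (the population definition) and `statistics.stdev` divides by $n-1$ (the sample estimator). The data set below is made up for the example.

```python
# Population standard deviation vs. the (n-1) sample estimate for a toy data set.
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

print("mean              :", statistics.mean(data))    # 5.0
print("population sigma  :", statistics.pstdev(data))  # 2.0      (divide by n)
print("sample estimate S :", statistics.stdev(data))   # ~2.138   (divide by n - 1)
```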
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9115262031555176, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/9842/where-is-the-particle-during-a-tunneling-event?answertab=votes
# Where is the particle during a tunneling event?

If, say, a particle with energy $E<V_0$ approaches a finite potential barrier with height $V_0$ and happens to tunnel through, where would the particle be during the time period when it is to the left of the potential barrier and to the right of the potential barrier? Surely there must be a finite amount of time for it to travel through to the other side, unless it simply teleports there? If it travels through with energy less than $V_0$, however, doesn't that mean it cannot enter the region of the potential barrier? -
3 It is precisely where its probability density is telling you it should be; there are no additional special effects here that differ from other QM situations. Why do you think it is a problem for it to be located inside of the barrier? That location is only classically unreachable. But this is quantum mechanics... – Marek May 14 '11 at 8:29

## 4 Answers

Isn't the whole point here that one cannot say where the particle IS exactly? One can only calculate the probabilities of it being at a given place. Tunneling means the probability of it being inside the barrier isn't zero (since we want the probability distribution to be continuous). There is always penetration of the wave function into the barrier. IMHO tunneling means the penetration goes deep enough to actually reach the other side, so the wave function of the particle is propagated further on that side too, meaning there's a chance the particle went through. During the passing the particle has had a chance of being inside the barrier. I don't know if it's correct to say that when the particle has passed, it has been inside the barrier, but that's just because the notion of the particle actually being somewhere is somewhat wrong. -

As Marek says, the particle may be found "inside" the barrier, if you like. It means you can really find it there as well as outside. But in QM a particle is a wave and is "created" by the whole volume involved. It is not permanently "localized" or "concentrated". -

"…where would the particle be during the time period when it is to the left of the potential barrier and to the right of the potential barrier?" The answer is in the barrier, as others above have stated. I think a key point (IMHO) would help to explain what is meant by locating a particle in the barrier. Since the particle's wave function has some probability of being in the barrier, in quantum mechanics one should be able to observe it in the barrier. So to observe this negative kinetic energy particle in the barrier, one has to localize it. Localizing it implies shining a photon of short enough wavelength on it in order to locate the particle inside the barrier. When this is done, the negative kinetic energy particle now has an increase in energy from the photon; therefore, the negative kinetic energy particle has acquired a positive kinetic energy and is found outside the barrier, not in the barrier. So the information of this positive energy particle tells us the details of the particle in the barrier. -

The question "how long does it take for the particle to tunnel?" seems to be more interesting than "where is the particle during tunneling?". Quantum mechanics predicts that for an opaque barrier the delay time is independent of the length of the barrier (see the Hartman effect), with the obvious problems when the barrier becomes very long. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9601625800132751, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/280449/help-figuring-out-all-the-alternative-solutions-to-the-integrals-of-sine-and-cos
# Help figuring out all the alternative solutions to the integrals of sine and cosine I always worry a lot when doing integrals with trigonometric functions because there's always many ways to write the final answer. I am trying to figure out the general pattern for the various different solutions. The integral $$\int \sin^3\left(x\right)\cos^2\left(x\right)\, \mathrm{d}x$$ My answer (which never seems to be the same with wolfram alpha's): $$\frac{\cos^5\left(x\right)}{5} + \frac{-\cos^3\left(x\right)}{3} + C$$ wolfram alpha lists many more: $$\cos^3\left(x\right)\left(\frac{1}{10}\cos\left(2x\right)-\frac{7}{30}\right) + C$$ $$\frac{1}{240}\left(-30\cos\left(x\right) - 5\cos\left(3x\right)+3\cos\left(5x\right)\right)+C$$ $$\frac{\cos^5\left(x\right)}{80} -\frac{\cos^3\left(x\right)}{48}-\frac{\cos\left(x\right)}{8}-\frac{1}{8}\sin^2\left(x\right)\cos^3\left(x\right) + \frac{1}{16}\sin^4\left(x\right)\cos\left(x\right)+\frac{1}{16}\sin^2\left(x\right)\cos\left(x\right) + C$$ Could anyone help me understand what integration procedure would result in those answers? - 1 You could, of course, get these other forms of the answer not by a different integration procedure, but by using trigonometric identities to manipulate them. – Hurkyl Jan 17 at 5:38 ## 2 Answers Putting $\cos x=z,dz=-\sin xdx$ $$\int \sin^3\left(x\right)\cos^2\left(x\right)\, \mathrm{d}x=\int (1-z^2)z^2(-dz)=\int z^4 dz-\int z^2 dz=\frac{z^5}5-\frac{z^3}3+c=\frac{\cos^5x}5-\frac{\cos^3x}3+C$$ (i) $$\frac{\cos^5x}5-\frac{\cos^3x}3=\frac{\cos^3x}{15}(3\cos^2x-5)=\frac{\cos^3x}{30}(6\cos^2x-10)=\frac{\cos^3x}{30}\{3(1+\cos2x)-10\}$$ as $\cos2x=2\cos^2x-1$ $$\implies \frac{\cos^5x}5-\frac{\cos^3x}3=\frac{\cos^3x(3\cos2x-7)}{30}$$ (ii) Using the Euler's Identity $e^{iy}=\cos y+i\sin y\implies e^{-iy}=\cos (-y)+i\sin(-y)=\cos y-i\sin y\implies 2\cos y=e^{iy}+e^{-iy}$, $(2\cos x)^5=(e^{ix}+e^{-ix})^5=(e^{5ix}+e^{-i5x})+\binom5 1(e^{3ix}+e^{-i3x})+\binom5 2(e^{ix}+e^{-ix})=2\cos5x+10\cos 3x+20\cos x$ Similarly, $(2\cos x)^3=(e^{ix}+e^{-ix})^3=e^{3ix}+e^{-3ix}+3(e^{ix}+e^{-ix})=2(\cos 3x+3\cos x)$ Put the values of $\cos5x,\cos 3x$ is the integration result to get $$\int \sin^3\left(x\right)\cos^2\left(x\right)\, \mathrm{d}x=\frac{\cos5x}{80}-\frac{\cos3x}{48}-\frac{\cos x}8+ c$$ which is the last alternative form of wolfram alpha (iii) As $\sin3x=3\sin x-4\sin^3x,\cos2x=2\cos^2x-1,$ $$\sin^3x\cos^2x=\frac{(3\sin x-\sin3x)(1+\cos2x)}8=\frac{3\sin x-\sin3x+3\sin x\cos2x-\sin3x\cos2x}8$$ Applying $2\sin A\cos B=\sin(A+B)+\sin(A-B),$ $$\sin^3x\cos^2x=\frac{6\sin x-2\sin3x+3(\sin 3x-\sin x)-(\sin5x+\sin x)}{16}=\frac{\sin3x}{16}-\frac{\sin5x}{16}+\frac{\sin x}8$$ Now we can use $$\int\sin mx dx=-\frac{\cos mx}m+C$$ - How do you get $(2\cos x)^3=2(\cos 3x+3\cos x)$ – yiyi Jan 17 at 4:26 @MaoYiyi, please find the edited answer. – lab bhattacharjee Jan 17 at 5:28 Not very fimilar with Euler's Identity, I see where I made my mistake. Its very helpful for you filling in the extra detail. – yiyi Jan 17 at 6:06 @MaoYiyi,you can have a look into the edited answer(iii) in case you are aware of the trigonometric formulas used. 
– lab bhattacharjee Jan 17 at 12:52 Since $$\sin^2x\cos^2x=\frac{1}{4}\left(2\sin x\cos x\right)^2=\frac{1}{4}\sin 2x$$ we get $$\int\sin^2x\cos^2x=\frac{1}{4}\int\sin 2x\,dx=-\frac{1}{8}\cos 2x\;\;\;(**)$$ and then, integrating by parts: $$u=\sin x\;\;,\;\;u'=\cos x\\v'=\sin^2x\cos^2 x\;;\,\;\;v=-\frac{1}{8}\cos 2x\,\Longrightarrow$$ $$\int \sin^3x\cos^2 x\,dx=\int\sin x\left(\sin^2x\cos^2 x\right)dx=$$ $$-\frac{1}{8}\cos 2x\sin x+\frac{1}{8}\int\cos x\cos 2x\,dx=-\frac{1}{8}\cos 2x\sin x+\frac{1}{8}\int (1-\sin^2x)\cos x\,dx=$$ $$=-\frac{1}{8}\left(\cos 2x\sin x+\sin x-\frac{1}{3}\sin^3x\right)+C$$ -
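Since several of these expressions look quite different, a quick SymPy check (written for this page) confirms that the substitution answer and the first two Wolfram Alpha forms quoted in the question all differentiate back to $\sin^3 x\cos^2 x$, so they can differ at most by a constant.

```python
# Verify by differentiation that three of the quoted antiderivatives are consistent.
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(x)**3 * sp.cos(x)**2

F1 = sp.cos(x)**5/5 - sp.cos(x)**3/3                              # substitution answer
F2 = sp.cos(x)**3 * (sp.cos(2*x)/10 - sp.Rational(7, 30))         # first Wolfram form
F3 = (-30*sp.cos(x) - 5*sp.cos(3*x) + 3*sp.cos(5*x))/240          # multiple-angle form

for F in (F1, F2, F3):
    # should print 0 for each, i.e. d/dx F = sin(x)**3 * cos(x)**2
    print(sp.simplify(sp.diff(F, x) - integrand))
```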
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9049034714698792, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/24392-solve-y-function-x.html
# Thread:

1. ## Solve for y as a function of x

The Eq is supposed to be solved for Y as a function of X : $y^x = x^y + 1$

2. Originally Posted by xterminal01 The Eq is supposed to be solved for Y as a function of X : $y^x = x^y + 1$
Take the natural log of both sides... $x\ln|y|=y\ln|x|$ $\frac{y}{x}=\frac{\ln|y|}{\ln|x|}$ $\frac{y}{x}=\log_x y$ I'm now as lost as you are...

3. Recall $\log(a+b)$ has no simplifying property.

4. Set $g(x,y)=y^{x}-x^{y}-1$ $\frac{\partial{g}}{\partial{y}}=xy^{x-1}-x^{y}\ln(x),$ whose value at (1,2) equals 1 ? Is this correct?
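For what it's worth, the numbers in the last post check out symbolically; a small SymPy sketch (written for this page; the sample point $x = 3/2$ in the numerical solve is an arbitrary choice) is below.

```python
# g(x, y) = y**x - x**y - 1; check g(1, 2) = 0 and dg/dy = 1 at (1, 2), then solve
# numerically for y at one other x to illustrate the implicit function y(x).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
g = y**x - x**y - 1

print(g.subs({x: 1, y: 2}))                              # 0
print(sp.diff(g, y).subs({x: 1, y: 2}))                  # 1

y_at = sp.nsolve(g.subs(x, sp.Rational(3, 2)), y, 2)     # y(1.5), starting guess y = 2
print(y_at)
```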
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9624068140983582, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/26697/list
## Return to Answer

4 clarification
Unfortunately, nonstandard models will survive any such attempt. This is guaranteed by the Löwenheim-Skolem Theorem which says that if a countable first-order theory T has an infinite model then it has one of every infinite cardinality. Since an uncountable model necessarily has nonstandard elements, this guarantees that there is a nonstandard model of T (and even countable ones). Actually, in your case you need a "two-cardinal" version of Löwenheim-Skolem. In your ZFC example, you move to a theory which interprets arithmetic inside a definable substructure (the set ω). The definable substructure of such a model might still be countable even if the model itself is uncountable. Nevertheless, one can still blow up the size of the natural number substructure via the ultrapower construction, for example. To evade the Löwenheim-Skolem Theorem, one has to move beyond first-order logic. For example, in infinitary logic one allows infinite disjunctions such as $$\forall x(x = 0 \lor x = S0 \lor x = SS0 \lor \cdots)$$ which ensures that the model is standard. Also, second-order allows quantification over arbitrary sets under the standard interpretation, which again prohibits non-standard models. (See this related question.) This is the characterization of N most commonly used by working mathematicians.

3 minor addition
Unfortunately, nonstandard models will survive any such attempt. This is guaranteed by the Löwenheim-Skolem Theorem which says that if a countable first-order theory T has an infinite model then it has one of every infinite cardinality. Since an uncountable model necessarily has nonstandard elements, this guarantees that there is a nonstandard model of T (and even countable ones). Actually, in your case you need a "two-cardinal" version of Löwenheim-Skolem. In your ZFC example, the natural numbers form a definable substructure of the model which might still be countable even if the model itself is uncountable. Nevertheless, one can still blow up the size of the natural number substructure via the ultrapower construction, for example. To evade the Löwenheim-Skolem Theorem, one has to move beyond first-order logic. For example, in infinitary logic one allows infinite disjunctions such as $$\forall x(x = 0 \lor x = S0 \lor x = SS0 \lor \cdots)$$ which ensures that the model is standard. Also, second-order allows quantification over arbitrary sets under the standard interpretation, which again prohibits non-standard models. (See this related question.) This is the characterization of N most commonly used by working mathematicians.

2 correction
Unfortunately, nonstandard models will survive any such attempt. This is guaranteed by the Löwenheim-Skolem Theorem which says that if a countable first-order theory T has an infinite model then it has one of every infinite cardinality. Since an uncountable model necessarily has nonstandard elements, this guarantees that there is a nonstandard model of T (and even countable ones). Actually, in your case you need a "two-cardinal" version of Löwenheim-Skolem. In your ZFC example, the natural numbers form a definable substructure of the model which might still be countable even if the model itself is uncountable. Nevertheless, one can still blow up the size of the natural number substructure via the ultrapower construction, for example. To evade the Löwenheim-Skolem Theorem, one has to move beyond first-order logic.
For example, in infinitary logic one allows infinite disjunctions such as $$\forall x(x = 0 \lor x = S0 \lor x = SS0 \lor \cdots)$$ which ensures that the model is standard. Also, second-order allows quantification over arbitrary sets under the standard interpretation. This again prohibits non-standard models; this is the characterization of N most commonly used by working mathematicians. 1 Unfortunately, nonstandard models will survive any such attempt. This is guaranteed by the Löwenheim-Skolem Theorem which says that if a countable first-order theory T has an infinite model then it has one of every infinite cardinality. Since an uncountable model necessarily has nonstandard elements, this guarantees that there is a nonstandard model of T (and even countable ones). To evade the Löwenheim-Skolem Theorem, one has to move beyond first-order logic. For example, in infinitary logic one allows infinite disjunctions such as $$\forall x(x = 0 \lor x = S0 \lor x = SS0 \lor \cdots)$$ which ensures that the model is standard. Also, second-order allows quantification over arbitrary sets under the standard interpretation. This again prohibits non-standard models; this is the characterization of N most commonly used by working mathematicians.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9013199210166931, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/211252-reducing-polynomial.html
# Thread:

1. ## reducing a polynomial

hi. i need to find the roots of $2x^3+5$ over the complexes, reals, and rationals. i don't know what to do other than trial and error.. i am starting with the complex. exactly the roots should be easy to find. but i am trying an online calculator and it is telling me the cube root of -5/2 has no real root.. this is not true.. so i am confused. i also don't understand how a polynomial with odd degree can have a complex root.. if someone can help me please give me a pointer for this. then thank you. what i don't understand is that with the odd degree, the imaginary part will not disappear as it is supposed to.

2. ## Re: reducing a polynomial

This is a trivial example of a cubic polynomial. The x term appears only once, therefore you can solve directly for x: $2x^3 + 5 = 0$ $2x^3 = -5$ $x^3 = -5/2$ $x = -(5/2)^{\frac{1}{3}}$
i also don't understand how a polynomial with odd degree can have a complex root.. if someone can help me please give me a pointer for this. then thank you. what i don't understand is that with the odd degree, the imaginary part will not disappear as it is supposed to.
Good question. The answer is that the numbers will not simply be imaginary: they will be of the form $a + bi$, with both a real and an imaginary component. It is possible for such numbers to be cube roots of a real number. For example, the complex cube roots of 1 are: $-\frac{1}{2} \pm i\frac{\sqrt{3}}{2}$ That is, $\left (-\frac{1}{2} + i\frac{\sqrt{3}}{2}\right )\left (-\frac{1}{2} + i\frac{\sqrt{3}}{2}\right )\left (-\frac{1}{2} + i\frac{\sqrt{3}}{2}\right ) = 1$ In addition, to get the complex cube roots of any real number, including the one in the problem, just multiply these numbers by the real cube root. This is because when you cube the product, you get the product of the cubes, one of which will simply be 1. So the three solutions to your polynomial are: $x = -(5/2)^{\frac{1}{3}}$ $x = -(5/2)^{\frac{1}{3}} \cdot \left ( -\frac{1}{2} + i\frac{\sqrt{3}}{2} \right )$ $x = -(5/2)^{\frac{1}{3}} \cdot \left ( -\frac{1}{2} - i\frac{\sqrt{3}}{2} \right )$

3. ## Re: reducing a polynomial

thank you. but sorry my mistake, i am not just looking to solve it. i wrote this because i had my mind on just the complex solution. i am looking to factor it to prime polynomials. over each of the complex, real and rationals. i wrote the question wrong. but as for that solution you give. yes, i was able to find the real value solution (i am surprised, as i said the online calculator which is normally useful said no real roots). $-(5/2)^{1/3}$. i found this. now i understand, to reduce a polynomial over complex numbers, you just multiply (x - the roots), so i would just plug these in. so $2x^3+5=(x + (5/2)^{\frac{1}{3}})(x + (\frac{1}{2} - i\frac{\sqrt{3}}{2}))(x + (\frac{1}{2} + i\frac{\sqrt{3}}{2}))$, does this look right?

4. ## Re: reducing a polynomial

No, the last two are not the roots of this polynomial. You would need the entire root. Also, multiply everything by the leading coefficient: $2 \left (x + (5/2 )^{\frac{1}{3}} \right ) \left (x+(5/2)^{\frac{1}{3}} \cdot ( -\frac{1}{2} + i\frac{\sqrt{3}}{2}) \right ) \left (x+(5/2)^{\frac{1}{3}} \cdot ( -\frac{1}{2} - i\frac{\sqrt{3}}{2} ) \right )$ You might also want to simplify the individual roots.

5. ## Re: reducing a polynomial

hi, thanks a lot.
please may i ask where this is coming from $\left (-\frac{1}{2} + i\frac{\sqrt{3}}{2}\right )\left (-\frac{1}{2} + i\frac{\sqrt{3}}{2}\right )\left (-\frac{1}{2} + i\frac{\sqrt{3}}{2}\right ) = 1$ i didn't know this. and i don't know how to show it myself. i have the solutions ( i can figure the other solutions now too, for reals and rationals), so i have the question solved, thank you very much. but i fear i have not learned it fully, i wish to see where this solution is coming from. if i can derive this result i can do a write up of the problem now thanks to your help. also does $\left (-\frac{1}{2} - i\frac{\sqrt{3}}{2}\right )\left (-\frac{1}{2} - i\frac{\sqrt{3}}{2}\right )\left (-\frac{1}{2} - i\frac{\sqrt{3}}{2}\right ) = 1$ also must be true? because i think if this is true for the conjugate i can understand the problem clearer and i feel i have correctly interpreted what you wrote for me. if not i am back a step.
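The identities asked about here are easy to check numerically. The snippet below (written for this write-up) cubes both $-\tfrac12 + i\tfrac{\sqrt3}{2}$ and its conjugate, then verifies that multiplying the real cube root of $-5/2$ by these cube roots of unity gives the three roots of $2x^3 + 5$.

```python
# Check that (-1/2 + i*sqrt(3)/2)**3 = 1, the same for its conjugate, and that scaling
# the real cube root of -5/2 by these gives all roots of 2x^3 + 5.
import numpy as np

w = complex(-0.5, np.sqrt(3) / 2)       # a primitive cube root of unity
print(w**3)                             # ~ 1 + 0j
print(np.conj(w)**3)                    # ~ 1 + 0j  (the conjugate works too)

r = -(5 / 2) ** (1 / 3)                 # the real root
for z in [r, r * w, r * np.conj(w)]:
    print(z, 2 * z**3 + 5)              # 2z^3 + 5 ~ 0 for each

# Cross-check against numpy's polynomial root finder:
print(np.roots([2, 0, 0, 5]))
```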
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619755744934082, "perplexity_flag": "head"}
http://www.conservapedia.com/Tidal_locking
# Tidal lock ### From Conservapedia (Redirected from Tidal locking) Tidal locking is the process by which two bodies in an orbital relationship will, over time, come to show the same face to one another. In other words, the period of rotation about the axis of the body will come to match the body's orbital period. ## Mechanism Considering two bodies, one larger (A) than the other (B): Body A will exert a gravitational force on B, causing B to be bulged towards body A. As body B continues to rotate, a different part of the surface will be pulled towards A. In effect, the bulge will "move". However, the material of body B will resist being warped in this way, resulting in a torque that will affect the rotation of B. This torque acts in such a way that eventually, the rotation of B is such that the same face always points towards A. This process will also alter the rotation of body A, but if A is much larger, then the process will take a greater amount of time. ## Example One example of tidal locking is the Earth/Moon system. The moon has become tidally locked, and so continually shows the same face to inhabitants of planet Earth. The earth's rotation is itself slowing. Given enough time, the earth could lock itself to face the Moon. ## Time scale The time required for tidal lock is difficult to estimate, primarily because it depends on many factors, unique to any given satellite or primary, that are difficult to measure. The formula for the time required for any moon to enter tidal lock with its primary is:[1] $t_{\textrm{lock}} = \frac{16 \rho \omega a^6 Q}{45 G m_p^2 k_2}$ where • $\rho\,$ is the density of the moon • $\omega\,$ is the initial rotation rate in rad s⁻¹ • $a\,$ is the semi-major axis of the moon's orbit. • $Q\,$ is the dissipation function of the moon (not to be confused with its apoapsis). • $G\,$ is the gravitational constant. • $m_p\,$ is the mass of the primary. • $k_2\,$ is the tidal second-order Love number of the moon. Thus the time required is very sensitive to orbital distance and somewhat less sensitive to the mass of the primary. Thus dense moons relatively close to their primaries are more likely to be in tidal lock. ## Problems for uniformitarianism posed by tidal lock Tidal locking is as likely to happen to the primary as to the moon. Indeed Pluto and Charon are mutually locked. But tidal lock has its most profound implications for the Earth-Moon system. Though the presence of tidal locking might appear to militate in favor of a great age for the solar system, the dynamics of tidal lock suggest youth, not age. As earth's rotation decreases, the moon must recede from the earth, or else angular momentum is not conserved (see above). Therefore the rate of deceleration of Earth's rotation must itself decelerate over time. For that reason alone, the Earth-Moon system cannot be more than 1.2 billion years old, because at such a time the Earth would have been rotating dangerously fast, and the Moon would have been touching the Earth. ## References 1. ↑ Gladman B., Quinn D. D., Nicholson P., and Rand R. "Synchronous Locking of Tidally Evolving Satellites." Icarus 122(1):166-192, July 1996. doi:10.1006/icar.1996.0117 Accessed July 4, 2008.
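To get a feel for the formula, the sketch below evaluates it for the Earth-Moon system. The values of $Q$, $k_2$ and the initial spin rate are assumptions chosen for illustration only (they are not taken from the reference above), so the output should be read purely as an order of magnitude.

```python
# Rough evaluation of the tidal-locking time formula above with assumed Moon parameters.
import math

rho   = 3.34e3                       # mean density of the Moon, kg/m^3
omega = 2 * math.pi / (10 * 3600)    # assumed initial spin: one rotation per 10 hours, rad/s
a     = 3.84e8                       # semi-major axis of the lunar orbit, m
Q     = 100                          # assumed dissipation function (poorly known)
G     = 6.674e-11                    # gravitational constant, SI
m_p   = 5.97e24                      # mass of the primary (Earth), kg
k2    = 0.024                        # assumed tidal Love number of the Moon

t_lock = 16 * rho * omega * a**6 * Q / (45 * G * m_p**2 * k2)
print(t_lock, "s  ~", t_lock / 3.15e7, "years")   # of order 10^7 years
```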
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9145244359970093, "perplexity_flag": "middle"}
http://www.cfd-online.com/W/index.php?title=Source_term_linearization&diff=4448&oldid=4404
Source term linearization

From CFD-Wiki

Introduction

In seeking the solution of the general transport equation for a scalar $\phi$, the main objective is to correctly handle the non-linearities by transforming them into linear form and then iteratively accounting for the non-linearity. The source term plays a central role in this respect when it is non-linear. For example, in radiation heat transfer, the source term in the energy equation involves the fourth power of the temperature. When the source is constant and independent of the conserved scalar, the finite volume method assumes that the value of S prevails over the control volume and thus can be easily integrated. For a given control volume P, we obtain $\int_{\Omega} S d\Omega = S\Omega \,$

Picard's Method

Picard's method is the most popular method used in conjunction with the finite volume method. For a given control volume P, we start by writing the source term as $S = S_C + S_P\phi_P \,$ where $S_C$ denotes the constant part of S and $S_P$ denotes the coefficient of $\phi_P$ (not the value of S at P). This allows us to place $S_P$ in the coefficients for $\phi_P$. Let $\phi_P^*$ denote the value of $\phi_P$ at the current iteration. We now write a Taylor series expansion of S about $\phi_P^*$ as $S = S^* + \left ( \frac {\partial S}{\partial \phi} \right ) ^* \left ( \phi_P - \phi_P^* \right )$ therefore $S_C = S^* - \left ( \frac {\partial S}{\partial \phi} \right ) ^* \phi_P^*$ $S_P = \left( \frac {\partial S}{\partial \phi} \right ) ^*$ where $\left ( \frac {\partial S}{\partial \phi} \right ) ^*$ is the gradient of S evaluated at $\phi_P^*$. As an illustrative example, consider $S = -T^3 + 10 \,$. Following Picard's method, we have $\left( \frac {\partial S}{\partial \phi} \right ) = -3T^2$ $S_C = -T_P^{*3} +10 + 3T_P^{*2}T_P^* = 2T_P^{*3} +10$ $S_P = -3T_P^{*2}$

References

1. Patankar, S.V. (1980), Numerical Heat Transfer and Fluid Flow, ISBN 0070487405, Hemisphere Publishing Corporation, USA.
2. Murthy, Jayathi Y. (1998), "Numerical Methods in Heat, Mass, and Momentum Transfer", Draft Notes, Purdue University (download).
3. Darwish, Marwan (2003), "CFD Course Notes", Notes, American University of Beirut.
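A minimal sketch of how this linearization is used in practice is given below (written for this article; the neighbour coefficients, neighbour values and cell volume are made-up numbers). Each outer iteration recomputes $S_C$ and $S_P$ from the previous iterate and solves the linearized balance for the cell value; note that $S_P \le 0$, which adds to the diagonal and keeps the linear system well behaved.

```python
# Picard source-term linearization for a single control volume with the example source
# S(T) = -T**3 + 10. Discrete balance: a_P*T_P = a_W*T_W + a_E*T_E + (S_C + S_P*T_P)*vol.
def source_linearization(T_star):
    """Return (S_C, S_P) for S(T) = -T**3 + 10 linearized about T_star."""
    S_star = -T_star**3 + 10.0
    dSdT = -3.0 * T_star**2            # (dS/dT)* ; this is S_P (always <= 0 here)
    S_C = S_star - dSdT * T_star       # = 2*T_star**3 + 10
    return S_C, dSdT

a_W, a_E = 1.0, 1.0                    # neighbour coefficients (assumed)
T_W, T_E = 1.0, 3.0                    # neighbour values (assumed)
a_P = a_W + a_E
vol = 1.0                              # cell volume (assumed)

T_P = 2.0                              # initial guess
for it in range(20):
    S_C, S_P = source_linearization(T_P)
    T_new = (a_W * T_W + a_E * T_E + S_C * vol) / (a_P - S_P * vol)
    converged = abs(T_new - T_P) < 1e-10
    T_P = T_new
    if converged:
        break
print(it, T_P)   # converges to the root of a_P*T = a_W*T_W + a_E*T_E + S(T)*vol
```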
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8912510871887207, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/90239-prove-inverse-function-also-linear-fractional-fraction.html
# Thread: 1. ## Prove that inverse of function is also linear fractional fraction? Prove that the inverse of the function $f : \mathbb{R} - \left\{-\frac{d}{c}\right\} \rightarrow \mathbb{R} - \left\{\frac{a}{c}\right\}$, $f(x) = \frac{ax + b}{cx + d}, ad - bc\neq 0$ is also a linear fractional function. Under what condition $f(x)$ coincides with its inverse. 2. The simplest way to do that is to find the inverse!
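A short SymPy computation (added here as a sketch) carries out that suggestion: solving $y = \frac{ax+b}{cx+d}$ for $x$ gives $x = \frac{dy-b}{-cy+a}$, which is again a linear fractional function with "determinant" $da - bc = ad - bc \neq 0$; and composing $f$ with itself shows that $f$ coincides with its inverse when $a + d = 0$ (apart from the trivial case $f(x) = x$).

```python
# Derive the inverse of f(x) = (a*x + b)/(c*x + d) and check the self-inverse condition.
import sympy as sp

a, b, c, d, x, y = sp.symbols('a b c d x y')
f = (a*x + b)/(c*x + d)

inv = sp.solve(sp.Eq(y, f), x)[0]
print(sp.simplify(inv))                 # equivalent to (d*y - b)/(-c*y + a)

# Self-inverse condition: with d = -a, f(f(x)) reduces to x.
ff = f.subs(x, f)
print(sp.simplify(ff.subs(d, -a)))      # x
```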
http://mathoverflow.net/revisions/94244/list
2: Acknowledging mistake

The computation below is incorrect, but is left up as a learning tool. Please read the comments for details.

Let $X_i$ be i.i.d. uniform r.v.'s on $[-1,1]$. If I compute $P_n$ for $n=3$, I don't get your value. Am I doing something wrong in my computation? I get the following: \begin{align} P_3 & = \mathbb{P}(X_1 > 0, \; X_2+X_1 > 0, \; X_3+X_2+X_1 > 0 ) \newline & = \frac{1}{8}\int_{-1}^1 \int_{-1}^1 \int_{-1}^1\mathbf{1}_{\{x>0,\;y>-x,\;z>-(x+y)\}} dz\;dy\;dx \newline &= \frac{1}{8}\int_{0}^1 \int_{-x}^1 \int_{-(x+y)}^1 dz\;dy\;dx \newline &= \frac{1}{8}\int_{0}^1 \int_{-x}^1 (1+x+y) dy\;dx \newline & = \frac{1}{8}\int_{0}^1 (\frac{3}{2}+2x+\frac{x^2}{2}) dx \newline & = \frac{1}{3}. \end{align} This disagrees slightly with your formula, which gives $P_3 = \frac{5}{16}$. Let me know if I have misunderstood the problem or made an error, and I will remove this immediately. Otherwise, it seems like a possible answer to your question is to generalize this computation by an inductive argument.
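Not part of the original exchange, but a quick Monte Carlo sketch makes it easy to compare the two candidate values mentioned above: the questioner's formula value $5/16 = 0.3125$ and the $1/3$ produced by the computation.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 1_000_000
X = rng.uniform(-1.0, 1.0, size=(trials, 3))   # i.i.d. uniform increments on [-1, 1]
S = np.cumsum(X, axis=1)                       # partial sums S_1, S_2, S_3
p3_hat = np.mean(np.all(S > 0, axis=1))
print(p3_hat)                                  # compare with 5/16 = 0.3125 and 1/3 ~ 0.3333
```

The estimate is consistent with the answer's own acknowledgment above that the computation is incorrect, and it is a cheap sanity check before attempting any inductive generalization.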
http://ams.org/bookstore?fn=20&arg1=mmonoseries&ikey=MMONO-192
$C^*$-Algebras and Elliptic Operators in Differential Topology
Yu. P. Solovyov and E. V. Troitsky, Moscow State University, Russia

Translations of Mathematical Monographs, 2001; 213 pp; hardcover
Volume: 192
ISBN-10: 0-8218-1399-4
ISBN-13: 978-0-8218-1399-7
List Price: US\$91
Member Price: US\$72.80
Order Code: MMONO/192

The aim of this book is to present some applications of functional analysis and the theory of differential operators to the investigation of topological invariants of manifolds. The main topological application discussed in the book concerns the problem of the description of homotopy-invariant rational Pontryagin numbers of non-simply connected manifolds and the Novikov conjecture of homotopy invariance of higher signatures. The definition of higher signatures and the formulation of the Novikov conjecture are given in Chapter 3. In this chapter, the authors also give an overview of different approaches to the proof of the Novikov conjecture. First, there is the Mishchenko symmetric signature and the generalized Hirzebruch formulae and the Mishchenko theorem of homotopy invariance of higher signatures for manifolds whose fundamental groups have a classifying space, being a complete Riemannian non-positive curvature manifold. Then the authors present Solovyov's proof of the Novikov conjecture for manifolds with fundamental group isomorphic to a discrete subgroup of a linear algebraic group over a local field, based on the notion of the Bruhat-Tits building. Finally, the authors discuss the approach due to Kasparov based on the operator $KK$-theory and another proof of the Mishchenko theorem. In Chapter 4, they outline the approach to the Novikov conjecture due to Connes and Moscovici involving cyclic homology. That allows one to prove the conjecture in the case when the fundamental group is a (Gromov) hyperbolic group.

The text provides a concise exposition of some topics from functional analysis (for instance, $C^*$-Hilbert modules, $K$-theory or $C^*$-bundles, Hermitian $K$-theory, Fredholm representations, $KK$-theory, and functional integration), from the theory of differential operators (pseudodifferential calculus and Sobolev chains over $C^*$-algebras), and from differential topology (characteristic classes). The book explains basic ideas of the subject and can serve as a course text for an introduction to the study of original works and special monographs.

Readership: Graduate students and research mathematicians interested in differential topology, functional analysis, and geometry; theoretical physicists.

• $C^*$-algebras and $K$-theory
http://math.stackexchange.com/questions/232193/rotation-on-a-sphere-and-change-in-coordinates
# Rotation on a sphere and change in coordinates Given a point P with coordinates $(P_x,P_y,P_z)$ on the sphere: $$(x-a)^2 +(y-b)^2 +(z-c)^2 = R^2$$ and a line with equation : $$\frac{x-x_1}{x_2-x_1}=\frac{y-y_1}{y_2-y_1}=\frac{z-z_1}{z_2-z_1}.$$ where $(x_1,y_1,z_1)$ and $(x_2, y_2, z_2)$ are the two points determining the line. How do I obtain the new coordinates of the point P after rotation about the line on angle $\theta$ such that the point stays on the sphere? - 1 presumably you want $x_1 = a$, $y_1 = b$, and $z_1 = c$ so that the line actually goes through the center of the sphere (and so is a symmetry axis)? – Willie Wong♦ Nov 7 '12 at 15:14 No, the line is arbitrary. The point rotates around the line in such a way that it always stays on the sphere. – Adam Nov 7 '12 at 19:38 Then I am afraid I have no idea what you mean by "rotates around the line in such a way that it always stays on the sphere". If you are not restricting it using any sort of symmetry, there are uncountably many circle actions on the sphere. Hence there isn't just "the" new coordinates of the point $P$; point $P$ can be practically anywhere on the sphere. – Willie Wong♦ Nov 8 '12 at 8:46 Dear Willie, you are right. Could you tell me the new coordinates in the case you mentioned - when the line is passing through the center of the sphere. – Adam Nov 8 '12 at 13:36 ## 1 Answer [Edit: the following answer addresses the case of the line through the center of the sphere.] Note: there is an ambiguity due to the fact that "angle $\theta$" does not specify which way the rotation is going (you can call the North Pole the South Pole, and suddenly the Earth rotate in the other way!) What I will assume is that you are given the center $(a,b,c)$ of the sphere and a vector $(\alpha,\beta,\gamma) \neq (0,0,0)$ so that $x_2 = a + \alpha$, $y_2 = b + \beta$, and $z_2 = c + \gamma$. We shall assume the rotation is right handed relative to the vector $(\alpha,\beta,\gamma)$, that is, if you look along the direction given by $(\alpha,\beta,\gamma)$, the rotation is clockwise. Note that for any $\lambda \neq 0$, $(\alpha,\beta,\gamma)$ and $(\lambda \alpha,\lambda\beta,\lambda\gamma)$ determine the same line. But if $\lambda < 0$ their rotational directions are opposite. For convenience we will require that the vector $(\alpha,\beta,\gamma)$ is a unit vector, that is $\alpha^2 + \beta^2 + \gamma^2 = 1$. We can always get this by dividing the vector by its length. Then we can use this formula here plus a translation: a point $(x,y,z)$ is sent to $$\begin{pmatrix} x \\ y \\ z\end{pmatrix} \mapsto \begin{pmatrix} a \\ b \\ c\end{pmatrix} + \begin{pmatrix} \cos \theta + (1 - \cos \theta)\alpha^2 & \alpha\beta(1-\cos\theta) - \gamma \sin\theta & \alpha\gamma(1-\cos\theta) + \beta \sin\theta \\ \alpha \beta(1-\cos\theta) + \gamma \sin\theta & \cos\theta + (1-\cos\theta)\beta^2 & \beta\gamma(1-\cos\theta) - \alpha\sin\theta\\ \alpha\gamma(1-\cos\theta) - \beta \sin\theta & \beta\gamma(1-\cos\theta) + \alpha\sin\theta & \cos\theta + (1-\cos\theta) \gamma^2 \end{pmatrix} \begin{pmatrix} x - a \\ y - b \\ z - c\end{pmatrix}$$ - Thank you. But when we rotate the point about a line through the center, the point always stays on the sphere. Going back to my initial question when the line is arbitrary.If we rotate the point with some angle, then it will leave the sphere. But the line connecting the image of the point with the center of rotation will meet the sphere at some point $P'$. 
This point is the image of our initial point on the sphere after the rotation. How can I get the coordinates of it? – Adam Nov 9 '12 at 0:35 @Adam: that detail is something that you should've mentioned in your original post after my second comment. `:-)`. Can you edit it into your question so it will be clearer to the readers? – Willie Wong♦ Nov 9 '12 at 8:33 @Adam: hang on: the statement "But the line connecting the image of the point with the center of rotation will meet the sphere at some point $P'$" is not true. Let the sphere be the unit sphere centered around $(2,0,0)$. Let the line of rotation be the $z$ axis. Let the angle be $\pi/2$. If the initial point is $P = (3,0,0)$ it gets rotated to $P' = (0,3,0)$. Any line through $P'$ and the $z$ axis will lie in the $y$-$z$ plane and will not intersect the sphere. Do you mean instead "... the line connecting the image with the center of the sphere..."? – Willie Wong♦ Nov 9 '12 at 8:43 Dear Willie, no I didn't mean that. I got confused. I have one more question. If we consider the same problem in spherical coordinates $P(r,\phi, \psi)$, what will be the transformation that gives the new coordinates $P_1(r, \phi_1, \psi_1)$? – Adam Nov 12 '12 at 11:53 @Adam: through an arbitrary center point of spherical coordinates that is not tied to the given sphere or the line? The answer would be messy. You are probably better off converting to rectangular coordinates and then converting back after you do the transformation. – Willie Wong♦ Nov 12 '12 at 13:17
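For the case the answer treats (an axis through the sphere's centre), the matrix above can be used directly. The sketch below is a small numerical version of it, written for a general line by first translating to a point on the line, and it reproduces Willie Wong's example with the unit sphere centred at $(2,0,0)$. It is an illustration added here, not code from the thread, and the helper name is made up.

```python
import numpy as np

def rotate_about_line(point, p1, p2, theta):
    """Rotate `point` by angle `theta` (right-handed) about the line through p1 and p2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    a, b, c = u / np.linalg.norm(u)                 # unit direction (alpha, beta, gamma)
    ct, st = np.cos(theta), np.sin(theta)
    R = np.array([                                  # the rotation matrix quoted in the answer
        [ct + (1 - ct) * a * a, a * b * (1 - ct) - c * st, a * c * (1 - ct) + b * st],
        [a * b * (1 - ct) + c * st, ct + (1 - ct) * b * b, b * c * (1 - ct) - a * st],
        [a * c * (1 - ct) - b * st, b * c * (1 - ct) + a * st, ct + (1 - ct) * c * c],
    ])
    p1 = np.asarray(p1, float)
    return p1 + R @ (np.asarray(point, float) - p1)

# Willie Wong's example: rotate P = (3,0,0) by pi/2 about the z axis.
print(rotate_about_line([3, 0, 0], [0, 0, 0], [0, 0, 1], np.pi / 2))   # approximately (0, 3, 0)
```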
http://math.stackexchange.com/questions/18690/algebraic-proof-that-sum-limits-i-0n-binomni-2n
# Algebraic Proof that $\sum\limits_{i=0}^n \binom{n}{i}=2^n$ I'm well aware of the combinatorial variant of the proof, i.e. noting that each formula is a different representation for the number of subsets of a set of $n$ elements. I'm curious if there's a series of algebraic manipulations that can lead from $\sum\limits_{i=0}^n \binom{n}{i}$ to $2^n$. - 10 is the proof you looking for using $(1+1)^n=2^n$? – Arjang Jan 24 '11 at 4:30 Well, no. That one I was also aware of. It's more of a curiosity if there's any direct method to go from the summation to $2^n$. – JSchlather Jan 24 '11 at 4:40 5 One should not think of the algebraic and combinatorial proofs as different. There is a straightforward dictionary between algebra and combinatorics in these cases (and it is given by taking generating functions). – Qiaochu Yuan Jan 24 '11 at 9:12 Zeilberger's algorithm might do it - it's a useful tool for this kind of problem in general (sum from $-\infty$ to $\infty$ of a hypergeometric with finite support). – Peter Taylor Jan 24 '11 at 9:29 @Peter Taylor: Zeilberger's algorithm produces the recurrence given in my answer. See Section 5.8 of Concrete Mathematics. – Mike Spivey Jan 24 '11 at 14:15 show 4 more comments ## 3 Answers Here's one. Let $g(n) = \sum \limits_{i=0}^n \binom{n}{i}$. Then $$g(n+1) - g(n) = \sum_{i=0}^{n+1} \binom{n+1}{i} - \sum_{i=0}^n \binom{n}{i} = \sum_{i=0}^{n+1} \left(\binom{n+1}{i} - \binom{n}{i}\right) = \sum_{i=0}^{n+1} \binom{n}{i-1}$$ $$= \sum_{i=0}^n \binom{n}{i} = g(n).$$ Here, we use the fact that $\binom{n}{n+1} = \binom{n}{-1} = 0$, as well as the binomial recurrence $\binom{n+1}{i} = \binom{n}{i} + \binom{n}{i-1}$. Thus we have $g(n+1) = 2g(n)$, with $g(0) = 1$. Since $g(n)$ doubles each time $n$ is incremented by 1, we must have $$g(n) = \sum_{i=0}^n \binom{n}{i} = 2^n.$$ - 3 This is more or less the same proof one would do to show that $(a+b)^n$ is what it is... – Mariano Suárez-Alvarez♦ Jan 24 '11 at 5:40 – Mike Spivey Jan 24 '11 at 5:57 Very nice and this proof seems to be analogous to what picakhu did as well. – JSchlather Jan 24 '11 at 5:57 Simply use the binomial formula. $$(a + b)^n = \sum_{k=0}^n {n \choose k} a^k b^{n - k}$$ With $a = b = 1$ you have your result. - Well, here is one. $$\sum_{i=0}^n \binom{n}{i}=2^n$$ $$\sum_{i=0}^n \binom{n}{i}+\sum_{i=0}^n \binom{n}{i}=2^{n+1}$$ $$\binom{n}{0}+\left [ \binom{n}{0}+\binom{n}{1} \right ]+...+\left [ \binom{n}{n-1}+\binom{n}{n}\right ]+\binom{n}{n}=2^{n+1}$$ $$\sum_{i=0}^{n+1} \binom{n+1}{i}=2^{n+1}$$ - So you're using induction? And I assume that in your last step it should be $\sum_{i=0}^{n+1}\binom{n+1}{i}$? – JSchlather Jan 24 '11 at 4:41 Yup, sorry about that. – picakhu Jan 24 '11 at 4:43
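None of this replaces the proofs above, but a quick numeric check of both the identity and the doubling recurrence $g(n+1) = 2g(n)$ used in the first answer is easy to run:

```python
from math import comb

for n in range(10):
    g_n = sum(comb(n, i) for i in range(n + 1))
    g_n1 = sum(comb(n + 1, i) for i in range(n + 2))
    assert g_n == 2**n and g_n1 == 2 * g_n      # identity and doubling recurrence
print("identity and recurrence hold for n = 0..9")
```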
http://math.stackexchange.com/questions/115748/limit-preserving-inclusion-of-a-subcategory?answertab=votes
# Limit preserving inclusion of a subcategory Is every inclusion functor of a full subcategory limit preserving? - ## 1 Answer No. Nor is it colimit preserving. What is true is that the inclusion reflects all limits and colimits. That is, if $\mathcal{D}$ is a full subcategory of $\mathcal{C}$ with inclusion $i : \mathcal{D} \hookrightarrow \mathcal{C}$, and $A : \mathcal{J} \to \mathcal{D}$ is a diagram, if there is an object $L$ in $\mathcal{D}$ such that $i L = \varprojlim i A$, then $L = \varprojlim A$. - cancelled............ – Marc Olschok Mar 14 '12 at 16:54
http://mathhelpforum.com/advanced-algebra/88930-projective-module-homomorphism.html
# Thread: 1. ## projective, module homomorphism A module P is projective if for every surjective module homomorphism $f : N \twoheadrightarrow M$ and every module homomorphism $g : P \rightarrow M$, there exists a homomorphism $h : P \rightarrow N$ such that $f \circ h = g$. Prove that if $P$ is projective and $h: P \rightarrow M$ is a surjective $R-$module homomorphism, then there exists some $R-$module homomorphism $s: M \rightarrow P$ such that $h \circ s=1_M$. This is a section map. I don't know how to prove this. I was thinking to let $N=M$, but it doesn't seem that easy. 2. Originally Posted by zelda2139 Prove that if $P$ is projective and $h: P \rightarrow M$ is a surjective $R-$module homomorphism, then there exists some $R-$module homomorphism $s: M \rightarrow P$ such that $h \circ s=1_M$. the question, as you wrote it, is wrong! (check what you wrote carefully!) the correct one is this: Prove that if $P$ is projective and $h: M \rightarrow P$ is a surjective $R-$module homomorphism, then there exists some $R-$module homomorphism $s: P \rightarrow M$ such that $h \circ s=1_P$. the proof is trivial: we have the identity map $1_P: P \longrightarrow P$ and a surjection $h: M \longrightarrow P.$ thus by the definition of projectivity (taking $f = h$ and $g = 1_P$) there exists a map $s: P \longrightarrow M$ such that $hs=1_P.$
http://www.physicsforums.com/showthread.php?p=3801511
Physics Forums

## High school Normal distribution two questions

TITLE: Normal distributions (sorry)

Question 1: Working: Now the answer for sigma i got correct, but μ i got incorrect for some reason. Could anyone explain where i've gone wrong for μ?

Question 2: Workings: All i need help with is part c and d: for c) i done 1 - 0.6463 to get the probability of the two shaded bits, but where do i go to get the part they are asking for? for d) i have no idea, any help would be great. thanks.

Recognitions: Gold Member Science Advisor Staff Emeritus
Well, one obvious error is that you have both $(25-\mu)/\sigma$ and $(64-\mu)/\sigma$ equal to -0.67. One of them should be positive. In any case, the normal distribution is symmetric so it should be clear that the mean will be exactly between the two "quartiles". The two shaded bits? I assume you mean the one shaded bit and the other outlying area. That, of course, is 1- .6463. To find the two parts separately, use the fact, as you are given, that the upper piece has twice the area of the lower piece.

Quote by HallsofIvy
Well, one obvious error is that you have both $(25-\mu)/\sigma$ and $(64-\mu)/\sigma$ equal to -0.67. One of them should be positive. In any case, the normal distribution is symmetric so it should be clear that the mean will be exactly between the two "quartiles". The two shaded bits? I assume you mean the one shaded bit and the other outlying area. That, of course, is 1- .6463. To find the two parts separately, use the fact, as you are given, that the upper piece has twice the area of the lower piece.
for question 1) Why is one of them positive (the second one) if they are symmetrical. Also how can i use symmetry to find the mean? Is it 25+45 / 2? for question 2) I still dont get how to get part C.

Recognitions: Homework Help
Quote by synkk
for question 1) Why is one of them positive (the second one) if they are symmetrical. Also how can i use symmetry to find the mean? Is it 25+45 / 2? for question 2) I still dont get how to get part C.
Is it 25+45 / 2? Close, but not quite: it IS (25 + 45)/2---what you wrote is 25 + 22.5. Brackets matter! RGV

Quote by Ray Vickson
Is it 25+45 / 2? Close, but not quite: it IS (25 + 45)/2---what you wrote is 25 + 22.5. Brackets matter! RGV
Could you explain why it is (25 + 45) / 2? I don't understand how the symmetry makes this true.

Recognitions: Homework Help
Quote by synkk
Could you explain why it is (25 + 45) / 2? I don't understand how the symmetry makes this true.
The distance between the mean and the 25th percentile is the same as the distance between the mean and the 75th percentile, because of symmetry. Therefore, the mean is the average of the 25th and 75th percentiles. RGV

Quote by Ray Vickson
The distance between the mean and the 25th percentile is the same as the distance between the mean and the 75th percentile, because of symmetry. Therefore, the mean is the average of the 25th and 75th percentiles. RGV
Alright thank you. Do you have any idea on how to do question 2 part c and d?

Recognitions: Homework Help
Quote by synkk
Alright thank you. Do you have any idea on how to do question 2 part c and d?
Yes, but I think you have already been given enough hints by others.
Just look at the diagram, sit down, relax, and _think_. Don't try to write down the answer right away: approach it systematically, piece-by-piece, and don't worry if it takes longer than you think it should. After you see what is happening you can go back and clean it up. RGV Quote by Ray Vickson Yes, but I think you have already been given enough hints by others. Just look at the diagram, sit down, relax, and _think_. Don't try to write down the answer right away: approach it systematically, piece-by-piece, and don't worry if it takes longer than you think it should. After you see what is happening you can go back and clean it up. RGV still on question 1) i dont understand why (45 - mu) / sigma is positive 0.67, could you explain please? for c i done (1-0.6463)/3 as there is 3 parts, with the shaded region being the smallest, which gets me the correct value. For D I've standardized it but i'm not sure what value i should put it against... Recognitions: Homework Help Quote by synkk still on question 1) i dont understand why (45 - mu) / sigma is positive 0.67, could you explain please? for c i done (1-0.6463)/3 as there is 3 parts, with the shaded region being the smallest, which gets me the correct value. For D I've standardized it but i'm not sure what value i should put it against... Never mind trying to understand whether (45 - mu) / sigma is 0.67, just tell me: what do YOU think the value of (45 - mu)/sigma should be? Remember, read the whole question carefully before answering. RGV Quote by Ray Vickson Never mind trying to understand whether (45 - mu) / sigma is 0.67, just tell me: what do YOU think the value of (45 - mu)/sigma should be? Remember, read the whole question carefully before answering. RGV I think it should equal -0.67 because P(z>(45-mu)/sigma) = 0.75 meaning it is in the left hand side of the distribution, so we reflect it and we are trying to find P(z<(45-mu)/sigma) which is 0.67, but because we reflected it doesnt it mean it is negative i.e that originally z was in the left hand side so negative. thanks for continuing to help. Recognitions: Homework Help Quote by synkk I think it should equal -0.67 because P(z>(45-mu)/sigma) = 0.75 meaning it is in the left hand side of the distribution, so we reflect it and we are trying to find P(z<(45-mu)/sigma) which is 0.67, but because we reflected it doesnt it mean it is negative i.e that originally z was in the left hand side so negative. thanks for continuing to help. You should avoid just blindly using formulas you do not really understand. Instead: think! If 45 is the 75th percentile, and μ is the 50th percentile (also equal to the mean), we MUST HAVE 45 > μ, so (45 - μ)/σ must be > 0. It cannot possibly be -0.67, which is a negative number. Your "equation" P(z>(45-mu)/sigma) = 0.75 is wrong, but would be OK if you replaced the 0.75 by ___ ? (I'm leaving it to you to fill in the blank.) Always, always, draw a picture. RGV by 0.25? so P(Z>(45-mu)/sigma) = 0.25 ? Is this correct? Recognitions: Homework Help Quote by synkk by 0.25? so P(Z>(45-mu)/sigma) = 0.25 ? Is this correct? Yes. RGV Quote by Ray Vickson Yes. RGV Thank you for all the help, i done question 2 part d also. 
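Putting the thread's conclusion into numbers (a sketch only; it assumes, as the discussion suggests, that 25 and 45 are the lower and upper quartiles in question 1):

```python
from scipy.stats import norm

q1, q3 = 25.0, 45.0                 # assumed lower and upper quartiles from the discussion
mu = (q1 + q3) / 2                  # symmetry: the mean is midway between the quartiles
z75 = norm.ppf(0.75)                # about 0.6745; the thread uses the rounded value 0.67
sigma = (q3 - mu) / z75
print(mu, sigma)                    # 35.0 and roughly 14.8
```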
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/s13acc.html
# NAG Library Function Document: nag_cos_integral (s13acc)

## 1  Purpose

nag_cos_integral (s13acc) returns the value of the cosine integral $\mathrm{Ci}\left(x\right)$.

## 2  Specification

#include <nag.h>
#include <nags.h>

double nag_cos_integral (double x, NagError *fail)

## 3  Description

nag_cos_integral (s13acc) evaluates
$$\mathrm{Ci}(x) = \gamma + \ln x + \int_0^x \frac{\cos u - 1}{u} \, du, \qquad x > 0,$$
where $\gamma$ denotes Euler's constant. The approximation is based on several Chebyshev expansions.

## 4  References

Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications

## 5  Arguments

1: x – double, Input
On entry: the argument $x$ of the function.
Constraint: ${\mathbf{x}}>0.0$.

2: fail – NagError *, Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_REAL_ARG_LE
On entry, x must not be less than or equal to 0.0: ${\mathbf{x}}=〈\mathit{\text{value}}〉$. The function is not defined for this value and the result returned is zero.

## 7  Accuracy

If $E$ and $\epsilon$ are the absolute and relative errors in the result and $\delta$ is the relative error in the argument then in principle these are related by $\left|E\right|\simeq \left|\delta \cos x\right|$ and $\left|\epsilon \right|\simeq \left|\left(\delta \cos x\right)/\mathrm{Ci}\left(x\right)\right|$. That is, accuracy will be limited by machine precision near the origin and near the zeros of $\cos x$, but near the zeros of $\mathrm{Ci}\left(x\right)$ only absolute accuracy can be maintained. For large values of $x$, $\mathrm{Ci}\left(x\right)\sim \left(\sin x\right)/x$, therefore $\epsilon \sim \delta x\cot x$, and since $\delta$ is limited by the finite precision of the machine it becomes impossible to return results which have any relative accuracy. That is, when $x\ge 1/\delta$ we have that $\left|\mathrm{Ci}\left(x\right)\right|\le 1/x\sim E$ and hence the result is not significantly different from zero. Hence, for $x>{x}_{\mathrm{hi}}$, where ${x}_{\mathrm{hi}}$ is a machine-dependent value, $\mathrm{Ci}\left(x\right)$ in principle has values less than machine precision, and so is set directly to zero.

None.

## 9  Example

The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results.

### 9.1  Program Text
Program Text (s13acce.c)

### 9.2  Program Data
Program Data (s13acce.d)

### 9.3  Program Results
Program Results (s13acce.r)
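For readers without the NAG Library at hand, SciPy's `scipy.special.sici` is an unrelated, independent implementation that returns the pair $(\mathrm{Si}(x), \mathrm{Ci}(x))$ and can serve as a quick cross-check of values produced by nag_cos_integral; this is a convenience suggestion added here, not part of the NAG documentation.

```python
from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

for x in (0.5, 1.0, 5.0, 20.0):
    si, ci = sici(x)
    print(f"Ci({x}) = {ci:.10f}")
```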
http://math.stackexchange.com/questions/21748/does-the-formula-sqrt-1-24n-always-yield-prime?answertab=votes
# Does the formula $\sqrt{ 1 + 24n }$ always yield prime? I did some experiments, using C++, investigating the values of $\sqrt{1+24n}$. ```` n: 1 -> 5 n: 2 -> 7 n: 5 -> 11 n: 7 -> 13 n: 12 -> 17 n: 15 -> 19 n: 22 -> 23 n: 35 -> 29 n: 40 -> 31 n: 57 -> 37 n: 70 -> 41 n: 77 -> 43 n: 92 -> 47 ```` I wonder, if $$\sqrt{1+24n}$$ is an integer, will it also be a prime? Thanks, Chan - @Arturo Magidin: Thanks for the editing. I was trying to get rid of the \n ^_^! – Chan Feb 12 '11 at 23:10 1 Do you mean, does it give a prime whenever it gives an integer? (Obviously, it doesn't always give a prime because it doesn't always give an integer). – Arturo Magidin Feb 12 '11 at 23:11 @Arturo Magidin: Yes, that what I tried to say. – Chan Feb 12 '11 at 23:14 3 @Chan: I'm not criticizing here, just curious: why did you skip $n = 26$ in your experiments? – Pete L. Clark Feb 13 '11 at 20:12 1 But many of your $n$ are not primes, so why did you skip 26? – Ross Millikan Mar 5 '11 at 5:48 show 1 more comment ## 6 Answers How about $n=26$? In general, take a composite number of the form $12k+1$ and take $n = k + 6k^2$ to arrive at a contradiction for your statement. For instance, $k=2 \Rightarrow n=26 \Rightarrow \sqrt{1+24n} = 25$ $k=4 \Rightarrow n=100 \Rightarrow \sqrt{1+24n} = 49$ $k=7 \Rightarrow n=301 \Rightarrow \sqrt{1+24n} = 85$ and so on. There are infinite composite numbers of the form $(12k+1)$ which gives infinite counterexamples to your claim. Your observation though is a nice one, since $24 | (p^2-1)$, $\forall \text{ primes } p > 3$. So you will find that all the primes $>3$ can be written as $\sqrt{1+24n}$. - @Sivaram Ambikasaran: so what form could yield prime? – Chan Feb 12 '11 at 23:12 @Chan: What form of what? – user17762 Feb 12 '11 at 23:18 1 @Chan: If you want what form of $n$ will yield you prime, that might be a difficult question to answer. You can of course find what form of $n$ will yield an integer. – user17762 Feb 12 '11 at 23:20 1 @Chan: Given that every prime number is spanned by the sequence, to know what values of $n$ yield a prime is equivalent to knowing the pattern of primes. – user17762 Feb 12 '11 at 23:28 2 Sorry, I made a common mistake based on a proof that number of primes is infinite. – InterestedGuest Feb 13 '11 at 0:25 show 5 more comments HINT $\rm\: \mod\ 24\::\ \ x^2 \equiv 1\ \Rightarrow\ (5x)^2 \equiv 1\:,\$ but $\rm\:5\:x\:$ is prime iff $\rm\: x= \pm1$ Note that this yields a general structural reason explaining why such integers can't all be primes. Namely, the integers you describe are simply those integers that, when reduced modulo $24\:,$ yield square roots of $1\:.\:$ But such roots are closed under multiplication: $\rm\ x^2\equiv 1,\ y^2\equiv 1\ \Rightarrow\ (xy)^2\equiv 1\:.\:$ But primes are not closed under multiplication. For example, one can take any of your prime solutions and multiply them to obtain a composite solution, e.g. $\rm\ 5^2 = 25,\: \ 5\cdot 7 = 35\:,\:$ etc. Notice that there are precisely $8\:$ square-roots of $\rm 1\ (mod\ 24)\$ viz. $\rm \pm 1,\:\pm 5,\:\pm 7,\: \pm 11\:,\:$ corresponding (by $\rm CRT$) to the product of the two roots $\rm\ \pm 1\ (mod\ 3)\$ times the four roots $\rm\ \pm 1,\: \pm 3\ (mod\ 8)\:.\:$ Note that these are precisely the congruence classes of all the integers coprime to $\:3\:$ and $\rm\:2\:,\:$ which includes all primes $> 3$. This explains your empirical observations above. 
The key observation, that $\rm\ x^2\equiv 1\ (mod\ 24)\ \iff\ x\:$ is coprime to $\:6\:,\:$ is nothing but a very special case computation of Carmichael's generalization of Euler's phi-function - see my post here for details. - $\sqrt{1+24\cdot 26} = \sqrt{625} = 25$! $$\sqrt{1+24\cdot n} = x$$ $${1+24\cdot n} = x^2$$ $$n = \dfrac{x^2 -1}{24}$$ So if $x=25$, $\dfrac{x^2 -1}{24}$ is an integer. - 12 I read the 25! as 25 factorial initially. I was quite confused ;-). – Jason DeVito Feb 13 '11 at 3:15 Nope. $\sqrt{1+24*381276} = 3025 = 605 * 5$ There are many such formulas which seem to yield only primes, but most of them aren't. - I am not sure what you mean. There are many examples, including infinite series, and polynomials in several variables all of whose positive values are prime numbers. – Andres Caicedo Feb 12 '11 at 23:17 Changed. I thought it's very difficult or impossible to find a prime generating function which only yields primes. Can you give me an example? – FUZxxl Feb 12 '11 at 23:21 1 – user17762 Feb 12 '11 at 23:33 Uhhh... That's difficult. Thanks for this link. – FUZxxl Feb 13 '11 at 11:43 (p-1)(p+1) must be divisible by 2 times 4 if p is an odd integer (since p-1 and p+1 are then "consecutive even numbers" so both are divisible by 2, and one of them is even divisible by 4). If p is not divisible by 3, then one of the numbers p-1 or p+1 must be divisible by 3. Thus for any odd integer p which is not divisible by 3, the product (p-1)(p+1) must be divisible by 2*4*3=24. So for ANY odd integer p not divisible by 3 there exists some integer n (depending on p) such that p^2-1=24 n. So... but you can fill in the blanks now, n'est-ce pas? - Take $n = 24k^2 + 2k$; then $\sqrt{1+24n} = 24k+1$. Now $24k+1$ is composite for infinitely many $k$. To give one such family: if $k = 24^{2r}$, $r=0,1,2,3,\dots$, then $25$ always divides $24k+1$. It follows that for $n = 24^{4r+1} + 2\cdot 24^{2r}$, $r=0,1,2,\dots$, the value $\sqrt{1+24n}$ is divisible by $25$, and hence definitely not prime. -
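The question's experiments were done in C++; the following Python sketch does the same scan and flags the composite values, so the counterexamples discussed above ($n=26$ giving $25$, $n=100$ giving $49$, and so on) show up immediately.

```python
from math import isqrt

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, isqrt(k) + 1))

for n in range(1, 400):
    m = 1 + 24 * n
    r = isqrt(m)
    if r * r == m:                                  # sqrt(1 + 24n) is an integer
        print(n, r, "prime" if is_prime(r) else "composite")
```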
http://unapologetic.wordpress.com/2012/08/10/derivations/?like=1&_wpnonce=7a8d289a6e
# The Unapologetic Mathematician ## Derivations When first defining (or, rather, recalling the definition of) Lie algebras I mentioned that the bracket makes each element of a Lie algebra $L$ act by derivations on $L$ itself. We can actually say a bit more about this. First off, we need an algebra $A$ over a field $\mathbb{F}$. This doesn’t have to be associative, as our algebras commonly are; all we need is a bilinear map $A\otimes A\to A$. In particular, Lie algebras count. Now, a derivation $\delta$ of $A$ is firstly a linear map from $A$ back to itself. That is, $\delta\in\mathrm{End}_\mathbb{F}(A)$, where this is the algebra of endomorphisms of $A$ as a vector space over $\mathbb{F}$, not the endomorphisms as an algebra. Instead of preserving the multiplication, we impose the condition that $\delta$ behave like the product rule: $\displaystyle\delta(ab)=\delta(a)b+a\delta(b)$ It’s easy to see that the collection $\mathrm{Der}(A)\subseteq\mathrm{End}_\mathbb{F}(A)$ is a vector subspace, but I say that it’s actually a Lie subalgebra, when we equip the space of endomorphisms with the usual commutator bracket. That is, if $\delta$ and $\partial$ are two derivations, I say that their commutator is again a derivation. This, we can check: $\displaystyle\begin{aligned} [\delta,\partial](ab)=&\delta(\partial(ab))-\partial(\delta(ab))\\=&\delta(\partial(a)b+a\partial(b)))-\partial(\delta(a)b+a\delta(b)))\\=&\delta(\partial(a)b)+\delta(a\partial(b)))-\partial(\delta(a)b)-\partial(a\delta(b)))\\=&\delta(\partial(a))b+\partial(a)\delta(b)+\delta(a)\partial(b)+a\delta(\partial(b))\\&-\partial(\delta(a))b-\delta(a)\partial(b)-\partial(a)\delta(b)-a\partial(\delta(b))\\=&[\delta,\partial](a)b+a[\delta,\partial](b)\end{aligned}$ We’ve actually seen this before. We identified the vectors at a point $p$ on a manifold with the derivations of the (real) algebra of functions defined in a neighborhood of $p$, so we need to take the commutator of two derivations to be sure of getting a new derivation back. So now we can say that the mapping that sends $x\in L$ to the endomorphism $y\mapsto[x,y]$ lands in $\mathrm{Der}(L)$ because of the Jacobi identity. We call this mapping $\mathrm{ad}:L\to\mathrm{Der}(L)$ the “adjoint representation” of $L$, and indeed it’s actually a homomorphism of Lie algebras. That is, $\mathrm{ad}([x,y])=[\mathrm{ad}(x),\mathrm{ad}(y)]$. The endomorphism on the left-hand side sends $z\in L$ to $[[x,y],z]$, while on the right-hand side we get $[x,[y,z]]-[y,[x,z]]$. That these two are equal is yet another application of the Jacobi identity. One last piece of nomenclature: derivations in the image of $\mathrm{ad}:L\to\mathrm{Der}(L)$ are called “inner”; all others are called “outer” derivations. ### Like this: Posted by John Armstrong | Algebra, Lie Algebras ## 4 Comments » 1. [...] a group — which is the collection of all such that for all . That is, those for which the adjoint action is the zero derivation — the kernel of — which is clearly an [...] Pingback by | August 13, 2012 | Reply 2. [...] that . In fact, while this case is very useful, all we need from is that it’s a nilpotent derivation of . The product rule for derivations generalizes [...] Pingback by | August 18, 2012 | Reply 3. [...] -algebra — associative, Lie, whatever — and remember that contains the Lie algebra of derivations . I say that if then so are its semisimple part and its nilpotent part ; it’s enough to [...] Pingback by | August 30, 2012 | Reply 4. [...] 
turns out that all the derivations on a semisimple Lie algebra are inner derivations. That is, they're all of the form for [...] Pingback by | September 11, 2012 | Reply
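The post's central claim, that each $\mathrm{ad}(x)$ is a derivation of the bracket, is the Jacobi identity in disguise, and it is easy to spot-check numerically on the matrix algebra $\mathfrak{gl}(3,\mathbb{R})$ with the commutator bracket. This is a throwaway check added here, not part of the post.

```python
import numpy as np

rng = np.random.default_rng(1)

def bracket(a, b):
    return a @ b - b @ a                      # commutator bracket on matrices

x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

# ad(x) acts as a derivation: ad(x)[y, z] = [ad(x)y, z] + [y, ad(x)z]
lhs = bracket(x, bracket(y, z))
rhs = bracket(bracket(x, y), z) + bracket(y, bracket(x, z))
print(np.allclose(lhs, rhs))                  # True: a numerical instance of the Jacobi identity
```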
http://physics.stackexchange.com/questions/19261/conservation-of-momentum-energy-collision-problem/19265
# Conservation of Momentum/Energy collision Problem I'm working on a physics problem in preparation for the MCAT and there's this particular problem that's troubling me. I don't know if it's a bad question or if I'm not understanding some sort of concept. I was hoping someone here can clarify. Here's the problem verbatim: A 1kg cart travels down an inclined plane at 5 m/s and strikes two billiard balls, which start moving in opposite directions perpendicular to the initial direction of the cart. Ball A has a mass of 2kg and moves away at 2m/s and ball B has a mass of 1kg and moves away at 4m/s. Which of the following statements is true? a) the cart will come to a halt in order to conserve momentum b) the cart will slow down c) the cart will continue moving as before, while balls A and B will convert the gravitational potential energy of the cart into their own kinetic energies d) these conditions are impossible because they violate either the law of conservation of momentum or conservation of energy At first glance, it appears to me that the answer is (D) because the system seemingly has more total momentum after the collision than before the collision. However, the answer explanation insists the correct answer to be (C) since it claims that "kinetic energy is not conserved; the system gains energy in this inelastic collision". I can understand that this gain in energy can come from gravitational potential energy from the incline the cart is on; however, it is ambiguous if the cart is accelerating down the incline. In order for the scenario to be consistent with choice (C), does the cart have to be accelerating down the incline? Or do you take the problem to mean that the cart is leaving the incline at 5m/s? Or am I missing or not understanding something? How would you interpret this problem and which explanation do you think is most consistent with the scenario? What assumptions do you have to make to arrive at your answer? The answer key's explanation, verbatim is as follows: The law of conservation of momentum states that both the vertical and horizontal components of momentum for a system must stay constant. If you take the initial movement of the cart as horizontal and the two balls move in perpendicular directions to the horizontal, it means that the cart must maintain its horizontal component of velocity. Therefore, (A) and (B) are wrong. If the billiard balls move as described, then kinetic energy is not conserved; the system gains energy in this inelastic collision. (C) correctly describes how this scenario is possible. - Hi jpan, and welcome to Physics Stack Exchange! Good question :-) I'm adding the "homework" tag because that tag applies to all problems of an educational nature, not just actual homework assignments. – David Zaslavsky♦ Jan 8 '12 at 2:38 1 I think the problem as worded is a bit vague. Since you are told the balls travel in a direction perpendicular to the motion of the cart, momentum conservation rules out a and b. I need to look at this in the morning when I'm not tired. – WWright Jan 8 '12 at 3:01 ## 2 Answers It's a bad question. For one thing, answer (C) is utter nonsense. (Maybe that's a bit harsh. It might be just regular nonsense.) In order for something to convert gravitational potential energy into kinetic energy, it has to drop to a lower height under the influence of gravity. This does not happen during a collision. Collisions in physics are effectively instantaneous events; they occur at one point in space and time and then they're over and done with. 
There is no change in height by which GPE could be converted into KE during the collision. Whatever (kinetic) energy the balls run away with, they had to obtain it from the kinetic energy that the cart had coming into the collision. Now, the kinetic energy of the cart at the point of the collision was converted from the gravitational potential energy that the cart had higher up the ramp. But that conversion was done by the cart alone; the balls had nothing to do with it. The other reason I don't like this problem is that they don't tell you at which point on the ramp the cart has the speed of $5\text{ m/s}$. It's possible that the cart maintains a constant velocity as it goes down the incline, but that would require some mechanism to keep the cart from accelerating, and if some such mechanism is involved, it should be mentioned in the problem. If that is the case, the gravitational potential energy that the cart started out with would have been converted into some other form of energy, not kinetic. It might be heat, electricity, spring energy, etc. but there's no way to know unless they tell you what mechanism is keeping the cart from accelerating. In a pinch, if you encountered this problem on the test and didn't have any opportunity to ask for clarification, I would just assume that $5\text{ m/s}$ is the speed at the end of the ramp, immediately prior to when the cart hits the balls. Why? The alternative is that the problem is unsolvable. If the speed of the cart coming into the collision is not $5\text{ m/s}$, you have no other information that would allow you to calculate what it is. (Self-check: do you understand why this is the case?) Once you assume that the speed of the cart coming into the collision is $5\text{ m/s}$, you have a collision of 3 objects, each of which has a mass and initial and final velocities. All 3 masses, all 3 initial velocities, and two of the final velocities are known, so you should have enough information to solve for the third. If you don't find any solution, then the situation is impossible and the answer is (D); on the other hand, if you do find a solution for the final velocity of the cart, then that velocity will distinguish between choices (A) ($v_f = 0$), (B) ($v_f < 5\text{ m/s}$), and (C) ($v_f = 5\text{ m/s}$, if you ignore the stuff about energy being converted). - Thanks! That's a very through explanation. Another thing to mention: I discussed this problem with a friend and he said that the problem's scenario should be impossible. Assuming the two balls are initially at rest, no matter what shape the front of the cart, it is impossible for the cart to hit the balls and have the balls recoil exactly perpendicularly to the impact. the only way this would be possible is if the balls had some initial velocity, with a component anti-parallel to the direction of the cart, that got exactly cancelled by part of the cart's momentum along that direction. – jpan Jan 8 '12 at 4:29 (continued) This results in the balls resulting net momentum being exactly perpendicular to the cart's (now reduced) momentum along its original direction. I was wondering what your thoughts are on this. – jpan Jan 8 '12 at 4:30 Yeah, I had the same thought. I didn't mention it because I figured that among the various things that are wrong with this problem, that's one of the less important ones, but your friend's argument seems perfectly legit. – David Zaslavsky♦ Jan 8 '12 at 4:44 The correct answer is D. 
It's not clear to me whether C is an officially sanctioned answer, or whether the author of the book this question was in just collected the questions and figured out the answers by himself (in this case, incorrectly). I suspect the second. – Peter Shor Jan 8 '12 at 15:08 If you consider ball A and ball B, they each have a momentum of 4 kg m/s , but in opposite directions, for a total momentum of 0. This means that the velocity of the cart cannot be changed by the collision, from the law of conservation of momentum. This in turn means that the kinetic energy is increased by the collision, violating the law of conservation of energy. The correct answer should be D. -
http://math.stackexchange.com/questions/267413/determine-stabilizer-of-an-edge-of-the-cube-and-its-orbit
# determine stabilizer of an edge of the cube and its orbit Let $G$ be the group of symmetries of the cube, and consider the action of $G$ on the set of edges of the cube. Determine the stabilizer of an edge and its orbit. Hence compute the order of $G$. The symmetric group $S_n$ acts on the set $X=\{1,2,3,\ldots,n\}$, and hence acts on $X\times X$ by $g(x,y)=(gx,gy)$. Determine the orbits of $S_n$ on $X\times X$. note: When I approach the first part, it should be obvious that the orbit contains 12 elements since an edge can go to any other edge through rotations and combinations of rotations, but I don't know how to show this in an organised way. e.g. if I decided to use the axes through the centres of opposite faces of the cube as my rotational axes and call them a, b, c, how can I describe the type of rotation that sends the original edge to the others using a, b, c so that each case is considered with no repetition? And the same question for the stabilizer. It seems to me that many rotations can eventually send the edge back to its original place, but I don't know how to categorise them and recognise those that are essentially the same in property. p.s. It may be a bit troublesome, but I would be very thankful if you can answer with a sketch of the cube so I can understand better. - I assume what you meant by `X*X` was $X\times X$? If so, do you mean that $G$ acts on $X\times X$ by $$g(x,y)=(gx,gy)\quad ?$$ – Zev Chonoles♦ Dec 30 '12 at 2:58 @ZevChonoles yes that's what I mean – Neptune Dec 30 '12 at 15:12 ## 1 Answer It's enough to show that any edge can be rotated into one of the neighbouring edges; by composing such operations, you can move any edge to any other edge. To rotate an edge into a neighbouring edge, rotate through $2\pi/3$ about an axis through the vertex they share. For an edge to be rotated into itself, the axis has to pass through the centre of the edge and the angle has to be $\pi$; this is the only kind of rotation that leaves a line segment invariant, other than a rotation about an axis along the line segment, which isn't an option in this case. -
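A brute-force companion to the answer (purely illustrative, not from the thread): model the cube with vertices $\{\pm 1\}^3$, so its symmetries are the 48 signed permutation matrices, and count the orbit and stabilizer of one edge directly. Since "group of symmetries" might also be read as rotations only, the sketch repeats the count for the determinant $+1$ subgroup.

```python
import itertools
import numpy as np

def cube_symmetries():
    """All 48 signed permutation matrices, the symmetries of the cube with vertices {-1, 1}^3."""
    mats = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1, -1), repeat=3):
            M = np.zeros((3, 3), dtype=int)
            for i in range(3):
                M[i, perm[i]] = signs[i]
            mats.append(M)
    return mats

def edge_key(a, b):
    """An edge as an unordered pair of vertices."""
    return tuple(sorted((tuple(int(t) for t in a), tuple(int(t) for t in b))))

e0 = (np.array([1, 1, 1]), np.array([1, 1, -1]))           # one edge of the cube

G = cube_symmetries()
orbit = {edge_key(M @ e0[0], M @ e0[1]) for M in G}
stab = [M for M in G if edge_key(M @ e0[0], M @ e0[1]) == edge_key(*e0)]
print(len(G), len(orbit), len(stab))                        # 48 = 12 * 4 (orbit-stabilizer)

rot = [M for M in G if round(np.linalg.det(M)) == 1]        # rotation subgroup only
orbit_rot = {edge_key(M @ e0[0], M @ e0[1]) for M in rot}
stab_rot = [M for M in rot if edge_key(M @ e0[0], M @ e0[1]) == edge_key(*e0)]
print(len(rot), len(orbit_rot), len(stab_rot))              # 24 = 12 * 2
```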
http://mathoverflow.net/questions/19020/brownian-motion-and-spheres/19117
## Brownian motion and spheres

Consider a Brownian motion on $[0;1]$. A (finite) discrete approximation of this Brownian motion consists of $N$ iid Gaussian random variables $\Delta W_i$ of variance $\frac{1}{N}$: $$W\left(\frac{k}{N}\right) = \sum_{i=1}^k \Delta W_i.$$ The vector $V_N = (\Delta W_1, \ldots, \Delta W_N) \in \mathbb{R}^N$ has a norm approximately equal to $1$ since the random variable $\|V_N\|^2$ has mean $1$ and variance equal to $\frac{C}{N}$, and $\frac{C}{N} \to 0$ (basic concentration of measure results can make this statement more precise). This is why (?) one can approximately say that in order to sample a Brownian path, it suffices to sample a point uniformly on the unit sphere of $\mathbb{R}^N$. Question: letting $N$ go to $\infty$, how can one formalize (if possible and/or correct) the idea that a Brownian path on $[0;1]$ is like a point uniformly chosen on the unit sphere of an infinite dimensional Banach space? -

## 5 Answers
Also, the inner product allows us to identify V with its dual. For any $x\in V$, I'll write $\langle X, x\rangle\equiv X(x)$. On any Hilbert space, there is a unique canonical Gaussian (cylinder) measure, with respect to which $\langle X,x\rangle$ is normal with mean 0 and variance $\Vert x\Vert^2$. For an infinite dimensional space, if $e_1,e_2,\ldots$ is an orthonormal basis, then $\Vert X\Vert^2=\sum_n\langle X,e_n\rangle^2$ will be infinite. Instead, $$n^{-1}\left(\langle X,e_1\rangle^2+\cdots+\langle X,e_n\rangle^2\right)\to 1$$ with probability one, which holds for any orthonormal sequence $e_1,e_2,\ldots$ (not necessarily a basis). For example, if W is a standard Brownian motion on the unit interval, then the derivative $\dot W=dW/dt$ does not exist in the usual way. However, it can be considered as having a cylindrical distribution on the Hilbert space $L^2([0,1],\lambda)$, $$\langle\dot W,x\rangle=\int_0^1 x(t)\dot W(t)\,dt = \int_0^1 x(t)\,dW(t)$$ where the right hand side is understood as a Wiener or Ito integral, and is normal with mean 0 and variance $\Vert x\Vert^2$. So, $\dot W$ has the canonical Gaussian distribution. In some ways, the canonical Gaussian measure on infinite dimensional spaces plays a similar role to the uniform measure on unit spheres in finite dimensional spaces. Consider the following, fairly basic statements about measures on a finite dimensional Hilbert space V which are invariant under orthogonal transformations: There is a unique uniform probability measure on the unit sphere $S=\lbrace x\in V\colon\Vert x\Vert=1 \rbrace$, say, $\mu_S$. That is, $\mu_S$ is invariant under orthogonal transformations. Then, if X is any random variable taking values in V whose distribution is invariant under orthogonal transformations, $\hat X\equiv X/\Vert X\Vert$ will have distribution $\mu_S$ (conditioned on $X\not=0$). Then, letting $\mu_R$ be the distribution of $R=\Vert X\Vert$ on $[0,\infty)$, the distribution of X is of the form $$\mathbb{P}(X\in A)=\int_0^\infty\int_S1_{\lbrace rx\in A\rbrace}\,d\mu_S(x)\,d\mu_R(r)$$ So, any distribution which is invariant under orthogonal transformations splits up into an integral over the uniform distributions on spheres. The cylinder probability measures invariant under orthogonal transformations on an infinite dimensional Hilbert space V split up in a similar way, if we replace "uniform distribution on the unit sphere" by "canonical Gaussian distribution". Suppose that the coordinate process X on $(\Omega,\mathcal{F})$, defined as above, has such a distribution. So, $\langle AX, x\rangle\equiv\langle X,A^tx\rangle$ has the same distribution as $\langle X,x\rangle$, for each orthogonal transformation A. Then, there is a nonnegative random variable R such that $$n^{-1}\left(\langle X,e_1\rangle^2+\cdots+\langle X,e_n\rangle^2\right)\to R^2$$ almost surely, as n tends to infinity. This holds for any orthonormal sequence $e_1,e_2,\ldots$ in V. Conditioned on $\lbrace R=0\rbrace$, $\langle X,x\rangle=0$ almost surely, for each x. Also, conditioned on the set $\lbrace R\not=0\rbrace$, X/R has the canonical Gaussian measure. I'm not sure about a standard reference for this (I'll have a look), but it's not too difficult to prove. Given an orthonormal sequence $e_1,e_2,\ldots$, write $X_n\equiv\langle X,e_n\rangle$. The joint distribution of the random variables $X_1,X_2,\ldots$ is invariant under permuting their order.
Letting $\mathcal{F_\infty}$ be the tail sigma-algebra, martingale convergence can be used to show that $R_n^2\equiv n^{-1}(X_1^2+X_2^2+\cdots+X_n^2)$ converges almost surely to $R^2=E[X_1^2\mid\mathcal{F_\infty}]$ which could, potentially, be infinite. Also, by applying an orthogonal transformation, R will not depend on the choice of orthonormal sequence. Conditioning on $\lbrace R=0\rbrace$, $E[X_1^2\mid R=0]=0$, so X=0. Conditioning on $\lbrace R_n\not=0\rbrace$, $(X_1,...,X_n)/R_n$ has the uniform distribution on the sphere of radius $\sqrt{n}$ in $\mathbb{R}^n$. From what we know about the uniform distribution on the n-sphere, $X_1/R=\lim_{n\to\infty}X_1/R_n$ has the standard normal distribution. So, $R\not=\infty$ and X/R has the canonical Gaussian distribution. - Let $\xi_n$ be a random $n$-dimensional vector with uniform distribution on the sphere {$x\in \mathbb{R}^n: |x|=1$}, $|\cdot|$ being Euclidean norm. It is well-known that the projections of $\sqrt{n}\xi_n$ on any finite number $m$ of coordinates are asymptotically Gaussian with standard independent components. I suspect that one can modify the proof of this fact to conclude that the continuous process $X_n$ on $[0,1]$ defined by $X_n(k/n)=\sum_{j=1}^k\xi_{n,j}$ for times of the form $k/n$ and linearly interpolated in between, converges in distribution in $C[0,1]$ to the Wiener process (one needs some maximal inequalities, of course, to prove tightness). If it is true, it should be well-known, too. As for a direct statement, I am not that sure. The "natural" limit of the unit balls you described is the Strassen ball in the Cameron-Martin space that consists of all absolutely continuous functions $x:[0,1]\to \mathbb{R}$ such that $x(0)=0$ and $\int_0^1 \dot x^2(s)ds=1$. Although this ball is tightly related to the Wiener measure, its Wiener measure is 0. Many books on Gaussian measures or Malliavin calculus discuss these issues. - thanks - I also had in mind something related to the Cameron-Martin space, but I must admit that I am still not entirely satisfied. Of course, there might not exist any proper way to formalize this heuristic. PS: and of course, the result that you mentioned is true. – Alekk Mar 23 2010 at 22:46 @Alekk: Do you happen to know a reference where this Functional CLT is stated/proved? – Yuri Bakhtin Mar 29 2010 at 16:57 H.P. McKean wrote a fun paper on this topic. He argues that one can think of Wiener measure as uniform measure on an infinite-dimensional sphere with radius "$\sqrt{\infty}$". See http://www.jstor.org/pss/2959482 for a copy of the paper. - Hi Alekk, you might have a look at this book and the references therein (search for the word "sphere" in the "Look Inside" functionality of some famous online bookstore (or page 60)) "Stochastic Analysis in Discrete and Continuous Setting" Lecture Notes in Mathematics by Nicolas Privault - Hi Alekk, it's not clear to me why the discretized Brownian motion is well approximated by the uniform measure on the sphere. Is there any rotational symmetry associated with the distribution of the increment vectors? But I really like the connection between a noncompact object and a compact object, in a way that does not invoke normalization.
- this is based on the fact that if G is a Gaussian variable in R^n with independent coordinates then G/|G| is uniformly distributed on the unit sphere of R^n. – Alekk Mar 23 2010 at 22:48
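A quick numerical check of this heuristic (a minimal sketch in Python, not from the thread; the variable names are my own): normalising a Gaussian vector gives a uniform point on the unit sphere of $\mathbb{R}^N$, which is the fact recalled in the comment above, and its partial sums then behave like a discretised Brownian path.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Uniform point on the unit sphere of R^N, obtained by normalising a Gaussian vector.
g = rng.standard_normal(N)
v = g / np.linalg.norm(g)

# Read the coordinates of v as increments Delta W_i ~ N(0, 1/N) and accumulate them.
path = np.cumsum(v)                # approximate W(k/N) for k = 1, ..., N

print(np.linalg.norm(v))           # exactly 1 by construction
print(np.var(np.sqrt(N) * v))      # close to 1: rescaled coordinates look standard normal
print(path[-1])                    # a sample of W(1), approximately N(0, 1)
```

For large $N$ the coordinates of the sphere point are close to independent $N(0, 1/N)$ variables, which is exactly the discrete approximation described in the question.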
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 97, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9266976714134216, "perplexity_flag": "head"}
http://divisbyzero.com/2010/02/04/why-do-mirrors-reverse-right-and-left-but-not-up-and-down-2/?like=1&_wpnonce=3a725eea8d
# Division by Zero A blog about math, puzzles, teaching, and academic technology Posted by: Dave Richeson | February 4, 2010 ## Why do mirrors reverse right and left but not up and down? [I apologize to those of you who have been reading my blog for more than a year. I'm reposting something I wrote last year at this time. I was then, and am now, teaching Calculus III, and we just finished discussing the cross product. I ended the conversation by telling my classes how the cross product helps us answer the question: why do mirrors reverse right and left but not up and down?] Stand in front of a mirror and hold up your right hand. The person standing in the mirror holds up her left hand. Why is that? Why does a mirror reverse left and right? After all, it does not reverse up and down. Before we answer that question, we have to ask a more basic one: what is “the right”? Looking for an answer, I turned to the venerable Oxford English Dictionary (subscription required). I was disappointed to discover that the OED does not give a definition of the term “right”! Instead it gives the circular path shown below. Right 17. a. = RIGHT HAND 2. Right hand 2. a. The right side. b. The direction towards the right. = RIGHT n.1 17a. “Right” and “left” are a slippery concepts that are hard to define. In fact, you need to know other things about an object before you can determine its right and left. For example, if I handed you a blob-like sea creature and asked you which side is its right side, you may not be able to answer me. If I told you where the top and front sides of the critter were, then you could quickly identify the right side. Here’s a mathematical explanation of what you would be doing mentally. Take the coordinate axes shown below, point the z-axis out of the top of the creature and the y-axis out its front, then the x-axis will point to its right. If you are familiar with vector operations in $\mathbb{R}^3$, consider this procedure. Take a vector pointing out its top $\mathbf{v}_{\text{top}}$ and a vector pointing out its front $\mathbf{v}_{\text{front}}$. Then the cross product of $\mathbf{v}_{\text{front}}$ and $\mathbf{v}_{\text{top}}$ is a vector pointing to its right, $\mathbf{v}_{\text{right}}$. That is: $\mathbf{v}_{\text{right}}=\mathbf{v}_{\text{front}}\times\mathbf{v}_{\text{top}}$. “Right equals front cross top“ The three directions, top, front, and right are mutually perpendicular and that if you know two of them, you know the third. For a person, a car, an animal, etc, the top and the front are unambiguous and intuitive. Then we use them to determine which side is the right side. Now let us go back to the mirror. What does it really reverse? If you raise your arm, your reflection raises her arm. If you stick your right arm out to the side, the reflection sticks an arm out in that same direction. However, if you point at the mirror, then the reflection points in the opposite direction—she points back at you. In other words, the mirror reverses front and back! Here’s where things start going wrong. Your brain does not have to do any work to recognize the top and front sides of your reflected image. Then it uses them to calculate your reflection’s right side. More specifically, your reflection’s z-axis points in the same direction as yours, but her y-axis points in the opposite direction (yours points into the mirror and hers points out). Consequently, if we use the xyz-coordinate system above, her x-axis points in the opposite direction as yours. 
Thus your right arm corresponds to her left arm, and we perceive the mirror reversing left and right. Here another way to describe what is happening. The xyz-coordinate system shown above is often called a right-handed coordinate system because if you take your right hand, point your fingers in the x-direction and curl them in the y-direction, then your thumb will be pointing in the z-direction. What a mirror truly does is changes a right-handed coordinate system into a left-handed one—that is to say, the reflection of a right-handed system is a left-handed system. When we look into a mirror, our brain, which is accustomed to using a right-handed coordinate system to tell right from left, errs because the mirror world actually has a left-handed coordinate system. When we see words in a mirror, they look like they are written from right to left, but that is because we are imposing our right-handed coordinate system on a left-handed mirror world. I’ll end this post with some assorted thoughts about the left and the right. • One thing that occurred to me while writing this post is that we treat different objects differently. Suppose I was holding a piece of paper out in front of me with my two hands and you were facing me. If I told you to point to the right hand side of the paper, then to point to my right hand, you would point to two opposite sides of the paper! There are certain objects (people, animals, cars, boats, etc.) in which right and left refer to the right and left sides from the objects’ perspective. However, there are other objects (pieces of paper, buildings, etc.) in which you use your right and left side to reference it. I assume this has to do with whether we can mentally substitute ourselves in place of the object—we can do that with other living things or with vehicles in which we can ride, but not inanimate objects like a piece of paper. • Perspective is important. As a child, I was always confused about where right field was on a baseball diamond. Is it on the batter’s right or on the fielders’ right? • It is no wonder that so many people confuse their right and their left. They have to compute a cross product in their head each time. • It is a good thing that right and left are relative quantities, otherwise cars driving in opposite directions on a two-way road, both driving on the right-hand side would be in the same lane! • I just came across these two articles about Andrew Hicks’ work with mirrors. ### Like this: Posted in Math, Teaching | Tags: calculus, coordinate system, left, left handed, linear algebra, physics, reflection, right, right handed ## Responses 1. “After all, it does not reverse up and down.” I like to point out that a mirror on the ceiling (or floor) *does* reverse up and down. By: Jeffo on February 4, 2010 at 3:32 pm • Yes, good point. But such a mirror doesn’t reverse front and back. Thus it still reverses right (= front x top). By: Dave Richeson on February 4, 2010 at 3:36 pm • Yes — is there a way to orient a mirror that reverses top and front and not right/left? By: Jeffo on February 4, 2010 at 4:04 pm 2. What a mirror is doing is reflecting light with respect to the surface normal at the point the photon hit it (the surface normal is the cross product of the two vectors that describe the plane of the mirror). Since we have depth perception we will see the reflected light as coming from beyond the mirror, even though it is coming from behind us. The mirror really does just invert the viewing frustum. Nothing is different. 
Nothing is on a different side. So if it doesn’t actually reverse left and right, why does our brain think it does? We see a person in the mirror and our brain assumes its another person looking at us. That’s when one’s mind’s sense of depth perception plays tricks on one’s sense of direction. Since our brain is saying that person is looking at us, we think that left and right directions are different when they’re not. In other words, we just confuse ourselves because we’re not used to mirrors! Our visual system never evolved to take a mirror into account. So we have to learn it. That’s why when we’re driving and some one puts on a turn signal and we see it in the mirror, we know that they are still turning in the same direction as the side of the car their turn signal was blinking. It’s the same psychology that makes driving a car backwards awkward at first. Then why do images and letterings appear backwards? I’m glad you asked. When one looks at the writing in a mirror one expects its orientation to change just like one thinks a person (who is mostly symmetric, while text is not) is facing the other direction. Automatically, one reads it backwards. Normally, the text’s right is on one’s left. If one is holding the text so they are facing the same way, it will appear backwards. It’s the same reason everything on a storefront window display looks backwards once one is inside the store. If you were out side the store and looking in and there was a mirror inside reflecting the writing in the window, it would be perfectly readable to you in the mirror as well as in the window. If you were in between the window and the mirror, you could read the writing in the mirror, but the writing in the window would be backwards to you. “Ambulance” is written out backwards, not just so that one can read it in you rear view mirror, but so that if the ambulance were transparent you could read the “backwards writing” in its front from behind the vehicle with the text oriented the right way. Aren’t mirror’s wonderful!? By: Jillian on February 5, 2010 at 9:40 am 3. The mirror does not reverse front and back. The mirror presents a two dimensional image. Not having any third dimension, it does not have a front or a back. A mirror does not reverse anything, our minds do this assuming the image is that of another person in front to us. By: DerekSmith on February 5, 2010 at 10:17 am 4. @DerekSmith A mirror may practically be 2d dimensional, but it reflects light that encodes three dimensional information. That’s why one can look in a mirror and still focus on objects as though they were much further in front of oneself (even though in reality they’re behind you). @Dave Richeson I wrote you back on your comment @ my blog entry. Sorry for not being clear with that last statement. I was just augmenting your point. By: Jillian on February 5, 2010 at 12:12 pm 5. A mirror doesn’t reverse anything imagewise. You see “left” and “right” based on where the photons come from in relation to each other and your eye. The photons from your right hand still come from the right. The photons from your left hand still come from the left. It’s when a human stands between you and the mirror, then turns around to face you, that their hands reverse and the photons from their right hand come from your left, and so forth. If the human instead stood on their head to face you, then they would be reversing up and down. But a mirror, which reverses nothing, doesn’t do that. Or write something on a transparency. 
Hold it up so you can read it directly, then do the same thing standing in front of a mirror. You can read it directly in the mirror. It hasn’t been reversed at all. Now reverse it so it “faces” the mirror. You will see it backwards in the mirror, now, because you reversed it. By: blair on February 8, 2010 at 6:34 pm 6. addendum: …or flip the transparency forward, so it faces the mirror. Now you see it upside down, because you turned it upside down. By: blair on February 8, 2010 at 6:36 pm 7. Is there a simple answer to the question “Why do mirrors reverse right and left but not up and down?” that everyone could understand without having to go to an online dictionary every two seconds? By: DaGeek on April 14, 2010 at 10:39 am 8. I am grateful for the serendipity that brought me to your website. My son had realized that his image in our mirror was not the one other people saw. And wondered if there were ‘true reflection’ mirrors. Thany you for carrying the ‘Light of Truth’, Mathematics. Arne Leon By: Arne on August 20, 2010 at 10:42 am 9. There’s no mind tricks involved really, I guess the light just stays in the same direction. The image may flip, but you have to think about how we actually get to see reflections in mirrors in the first place… Light travels from somewhere and hits your face. Now imagine that your face has been ‘imprinted’ on that wave of light. That light bounces off your face, then into the mirror, then back to your eyes. As it comes back, the image is still the same way around as it was when it first ‘imprinted’ your face. This image is what you see, reflecting back. There’s more scientific ways of describing it, but that is essentially all that happens :) By: John Mirren on October 14, 2011 at 1:23 pm 10. I came here for an extra school question last year. Oh if only John Mirren had posted then. By: DaGeek247 on October 17, 2011 at 9:59 am 11. This is very well explained but again confuses making use of the “right handed” coordinate system. It can be either a left-handed one or a right-handed one. But, the same type is to be used in both the instances. So, the conclusion is that whatever hand you use for defining the cross product, use the same all over!! By: Uday Zingtudor on December 10, 2011 at 9:36 am 12. just turn the mirror upside down duh! By: fakedad on January 6, 2012 at 6:22 pm
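The post's rule that right equals front cross top can be checked numerically. This is a minimal sketch (my own, using NumPy; the mirror is taken to be the xz-plane, so reflection simply negates the y-coordinate):

```python
import numpy as np

top = np.array([0.0, 0.0, 1.0])     # +z: out of the top of your head
front = np.array([0.0, 1.0, 0.0])   # +y: the direction you face, into the mirror

right = np.cross(front, top)        # "right = front x top" gives +x

# A mirror in the xz-plane reverses front and back only: (x, y, z) -> (x, -y, z).
reflect = np.diag([1.0, -1.0, 1.0])
r_top = reflect @ top
r_front = reflect @ front
r_right = np.cross(r_front, r_top)  # the reflection's own "right"

print(right)     # [ 1.  0.  0.]
print(r_right)   # [-1.  0.  0.]
```

The reflected frame's cross product points along $-x$: the mirror has turned a right-handed triple into a left-handed one, which is the post's explanation of the apparent left/right swap.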
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532952904701233, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=177781
Physics Forums ## continued fractions Hi, my question is: given the recurrence relation for the convergents, could we construct a continued fraction $$\alpha = a_{0}+ \cfrac{b_{0}}{a_{1}+\cfrac{b_{1}}{a_{2}+\dotsb}}$$ in which all the coefficients a's and b's are equal to a certain integer? For example, if all the coefficients (numerators and denominators) * are one, we have just the Fibonacci (Golden ratio) constant $$\frac{2}{\sqrt 5 -1}$$ * are two, we have exactly $$\sqrt 2 +1$$ in the sense that, expanding the 2 numbers above, their continued fractions are made only of 1's or 2's. But can we construct a general continued fraction with all the numbers equal to 3, 4, 5, 6, ...? Quote by Klaus_Hoffmann Hi, my question is: given the recurrence relation for the convergents, could we construct a continued fraction $$\alpha = a_{0}+ \cfrac{b_{0}}{a_{1}+\cfrac{b_{1}}{a_{2}+\dotsb}}$$ in which all the coefficients a's and b's are equal to a certain integer? For example, if all the coefficients (numerators and denominators) * are one, we have just the Fibonacci (Golden ratio) constant $$\frac{2}{\sqrt 5 -1}$$ * are two, we have exactly $$\sqrt 2 +1$$ in the sense that, expanding the 2 numbers above, their continued fractions are made only of 1's or 2's. But can we construct a general continued fraction with all the numbers equal to 3, 4, 5, 6, ...? Something doesn't seem right. If making all the a's and b's 1 gives the Fibonacci (golden ratio) constant, wouldn't making all the a's and b's 2 simply be 2 times the Golden ratio rather than $$\sqrt 2 + 1$$? Postscript. On the other hand: It seemed to me that any similar manner of constructing a continued fraction will give an irrational number of some fixed value which would be a multiple of the continued fraction comprising only ones, but then n/n = 1 so I guess I made a mistake. Anyway at least the numbers are completely predefined.
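Reading the question as being about the regular continued fraction whose partial quotients are all equal to $k$ (with unit numerators, which is what the two examples actually expand into), the limit $x$ satisfies $x = k + 1/x$, so $x = \frac{k+\sqrt{k^2+4}}{2}$: for $k=1$ this is the golden ratio $\frac{2}{\sqrt{5}-1}$ and for $k=2$ it is $\sqrt{2}+1$. A small numerical check (a sketch, not from the thread):

```python
from math import sqrt

def cf_all_equal(k, depth=60):
    """Evaluate the regular continued fraction [k; k, k, ...] truncated after `depth` levels."""
    x = float(k)
    for _ in range(depth):
        x = k + 1.0 / x
    return x

for k in range(1, 7):
    closed_form = (k + sqrt(k * k + 4)) / 2   # positive root of x^2 - k*x - 1 = 0
    print(k, cf_all_equal(k), closed_form)
```

If the numerators $b_i$ are also set to $k$, the limit instead satisfies $x = k + k/x$, i.e. $x = \frac{k+\sqrt{k^2+4k}}{2}$, which for $k=2$ gives $1+\sqrt{3}$ rather than $\sqrt{2}+1$; the two readings agree only at $k=1$. In neither case is the answer a multiple of the all-ones continued fraction.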
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9207497835159302, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/8715/how-to-find-pixel-position-in-a-graph-given-two-points
# How to find pixel position in a graph , given two points? I have a graph where on my Y axis , I have price and on X axis I have period/time. My range on Y axis is from 10 (lower limit) to 200 (upper limit), so to draw the point on graph I need the Y position of the value for example if my price is 100.50 then how should I calculate Y position of this price . Example: If my upper limit point is (200,HeightOfGraph) that is P1, and the lower limit point is (0,0) is P2, so if height of graph is 100 so if the value is 100 it's y position will be 50 , but how should I calculate it , is there any formula to do this ? - It isn't very clear what you are trying to achieve, `ListPlot` plots points at coordinates given by {x,y} pairs. In what way does this not solve your problem ? – image_doctor Jul 25 '12 at 8:58 – Yves Klett Jul 25 '12 at 9:33 Seems like you only solve tough problems, thanks anyway. – Praveen Jul 25 '12 at 9:41 It is not about tough problems at all. You simply have a much better chance of quickly getting (usually top-flight) answers from users if you pose a question well. – Yves Klett Jul 25 '12 at 9:46 – image_doctor Jul 25 '12 at 9:49 show 1 more comment ## 4 Answers You can find the y value for a given x coordinate for a straight line by creating an interpolation function in Mathematica of order 1. For points {0,0} and {200,h} you would do the following. ````f = Interpolation[{{0, 0}, {200, h}}, InterpolationOrder -> 1] ```` Where `h` is the height of your second point. You can then find the value of y for any x coordinate using, here for the x value 50: ````f[50] ```` You can replace h with the value for your application to get a numeric answer, as in: ````f2 = Interpolation[{{0, 0}, {200, 100}}, InterpolationOrder -> 1] ```` Here, for a value of `h` of 100. ````f2[50] ```` 25 You can plot the values for the function you have created to check that it represents your desired solution: ````Plot[f2[x], {x, 0, 200}] ```` - If you have a number of points you use `ListPlot` with the points' coordinates as a list of arguments. You shouldn't worry about the placing on the graph, that's what Mathematica takes care of. If you want you can change the vertical range by using `PlotRange`. Example: ````tungsten = {{20, 5.5}, {227, 10.5}, {727, 24.3}, {1727, 55.7}, {2727, 90.4}, {3227, 108.5}} (* resistivity in microOhm cm at x °C *) ListPlot[tungsten, PlotRange -> {0, 80}] ```` - If I understand your question correctly (and it isn't entirely clear), then your question is how to plot a line, given two coordinates, one at each end. This is pretty straightforward. ````Graphics[Line[{{0, 0}, {200, height}}] /. height -> 100, Axes -> True] ```` But it seems like you are wanting the equation, so you can work out the intermediate points. The formula I learned in high school was $y = m x + b$, where $m = \frac{rise}{run}$, where $rise$ is the difference between the $y$ coordinates and $run$ is the difference between the $x$ coordinates. ````riseoverrun[{x1_, y1_}, {x2_, y2_}] := With[{m = (y2 - y1)/(x2 - x1)}, Flatten@{m, Solve[m x1 + b == y1, b]}] ```` So you can write: ````riseoverrun[{0, 0}, {200, height}] ```` Which gives: ````{height/200, b -> 0} ```` You can then find the equation: ````First@coords x /. height -> 100 + b /. (Last@coords) (* x/2 *) ```` Or if you prefer, just put the above in the `Plot` function: ````Plot[First@coords x /. height -> 100 + b /. 
(Last@coords), {x, 0, 100}] ```` - I think you can get what you want from using simple ratios as follows: ````p = (v-v1)/(v2-v1)*(p2-p1) + p1 ```` where: `p` is the pixel position corresponding to the value `v` you want, and `p1` is the pixel position at the bottom of the y axis corresponding to value `v1`, and `p2` is the pixel position at the top of the y axis corresponding to value `v2` - Hello @hay, welcome to Mathematica.SE! I fixed up your missing parenthesis and tweaked some formatting. However, you have both `p1` and `P1` in your answer. Mathematica is case-sensitive. Do you mean for them to be the same? – Verbeia♦ Jul 27 '12 at 3:44 yes, thanks for the parenthesis, and P1 should be p1. I was a bit sloppy with my answer. I didn't realize mathematica was case-sensitivE. – hay Jul 29 '12 at 5:00
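The same linear map is easy to write down in any language; here is a short Python sketch (the function name and the sample numbers are just for illustration) of the ratio formula in the last answer:

```python
def value_to_pixel(v, v1, v2, p1, p2):
    """Linearly map a data value v in [v1, v2] onto the pixel range [p1, p2]."""
    return (v - v1) / (v2 - v1) * (p2 - p1) + p1

# Prices from 10 to 200 drawn on a graph 100 pixels high (the setup in the question):
print(value_to_pixel(100.50, 10, 200, 0, 100))   # about 47.6 pixels up from the bottom
# The question's own example, with limits 0 and 200:
print(value_to_pixel(100, 0, 200, 0, 100))       # 50.0
```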
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8881402611732483, "perplexity_flag": "middle"}
http://luckytoilet.wordpress.com/tag/prime/
# Lucky's Notes Notes on math, coding, and other stuff ## Is 2011 a special number? January 1, 2011 Today is the first day of the year 2011. 2011 is a prime number. Not only that, but according to numerous math bloggers and twitterers on the internet, 2011 is the sum of 11 consecutive primes — that is, 2011 = 157 + 163 + 167 + 173 + 179 + 181 + 191 + 193 + 197 + 199 + 211. Furthermore (as I already said), 2011 is prime. Naturally I wonder, how rare are these numbers — numbers that are prime but also a sum of a bunch of consecutive primes? This seems like a problem easily solved with some programming. Let us write P(n,k) as the number of primes less than or equal to n that can be written as a sum of at least k consecutive primes. How big is P(2011,k) compared to $\pi(2011) = 305$, the number of primes less than or equal 2011? My method for computing P(n,k) is pretty simple. First we start with a list of prime numbers, then write the cumulative sums for the prime numbers (these images were taken with my laptop camera off a whiteboard): Next take every pair from the bottom list, and find their differences: Because of the way I arranged the numbers, we can see that the bottom diagonal are all the numbers that can be written as a sum of 1 prime (obviously, the prime numbers), then the next row are all numbers that can be written as a sum of 2 primes, and so on. If we want to compute P(n,k) we simply list enough prime numbers to complete this table, take everything above a diagonal, and filter out all the duplicates. Here’s my hopefully correct implementation: ```import java.util.*; import java.lang.*; class Main { public static void main (String[] args) throws java.lang.Exception { int LIM = 2011; int NCONSEC = 3; boolean[] sieve = new boolean[LIM+1]; Arrays.fill(sieve, true); for(int i=2; i<=LIM; i++){ if(!sieve[i]) continue; for(int j=2; j*i<=LIM; j++) sieve[i*j] = false; } List<Integer> primes = new ArrayList<Integer>(); for(int i=2; i<=LIM; i++) if(sieve[i]) primes.add(i); List<Integer> cuml = new ArrayList<Integer>(); cuml.add(0); int cum = 0; for(int p : primes){ cum += p; cuml.add(cum); } Set<Integer> consums = new TreeSet<Integer>(); for(int i=1; i<cuml.size(); i++){ for(int j=0; j<=i-NCONSEC; j++) consums.add(cuml.get(i) - cuml.get(j)); } int p = 0; for(int i : consums){ if(i > LIM) break; if(!sieve[i]) continue; //System.out.println(i); p++; } System.out.println(p); } } ``` It turns out that P(2011,3) = 147 and since there are 305 primes less or equal to 2011, roughly half of all primes under 2011 can be written as a sum of at least 3 consecutive primes. Hardly a special number at all! Even if we insist on a list of 11 consecutive primes, there are still 56 primes less than or equal to 2011 that can be written as a sum of 11 consecutive primes, about 1 in every 5 primes. Happy New Year! 2 Comments | Mathematics, Programming | Tagged: java, prime | Permalink Posted by luckytoilet ## The Sieve of Sundaram April 18, 2010 The Sieve of Eratosthenes is probably the best known algorithm for generating primes. Together with wheel factorization and other optimization options, it can generate primes very quickly. But a lesser well known algorithm for sieving primes is the Sieve of Sundaram. This algorithm was discovered in 1934 by Sundaram; like the sieve of Eratosthenes it finds all prime numbers up to a certain integer. 
### The algorithm A simplified version of the algorithm, using N as the limit to which we want to find primes to: ```m =: Floor of N/2 L =: List of numbers from 1 to m For every solution (i,j) to i + j + 2ij < m: Remove i + j + 2ij from L For each k remaining in L: 2k + 1 is prime. ``` In practice we can find solutions to $i + j + 2ij < m$ by using two nested for loops: ```For i in 0 to m: For j in i to m: L[i + j + 2ij] =: False ``` Here i is always less than j, because the two are interchangeable and filtering it twice would be a waste. We don’t actually need to loop j from 0 to m. From the inequality $i + j + 2ij < m$, we can solve for j: $j < \frac{m-i}{2i+1}$. The new algorithm: ```m =: Floor of N/2 L =: Boolean array of length m Fill L with true For i in 0 to m: For j in i to (m-i)/(2i+1): L[i + j + 2ij] =: False For each k remaining in L: 2k + 1 is prime. ``` ### Why this algorithm works In the algorithm, $2k+1$ is prime where k can be written as $i + j + 2ij$ where i and j are integers. We can rewrite this: $\begin{array}{l} 2(i+j+2ij) + 1 \\ = 2i + 2j + 4ij + 1 \\ = (2i+1) (2j+1) \end{array}$ Both $2i+1$ and $2j+1$ are odd numbers, and any number that can be written as the product of two odd numbers are composite. Of the odd numbers, those that cannot be written as the product of two odd numbers are prime. We’ve filtered everything that can be written as $(2i+1)(2j+1)$ so we are left with the odd prime numbers. This algorithm only gets the odd prime numbers, but fortunately there is only one even prime number, 2. ### Benchmarks with the Sieve of Eratosthenes Here’s an implementation of the Sieve of Sundaram: ```#include <stdio.h> #include <stdlib.h> typedef unsigned long long ll; int main() { ll n = 100000000LL; ll m = n/2; char *sieve = malloc(m); for(ll i=0; i<m; i++) sieve[i] = 1; for(ll i=1; i<m; i++) for(ll j=i; j<=(m-i)/(2*i+1); j++) sieve[i+j+2*i*j] = 0; ll s=1; for(ll i=1; i<m; i++) if(sieve[i]) s++; printf("%llu", s); return 0; } ``` This code counts the number of primes below 100 million, which should be 5761455. The above code runs in 9.725 seconds. Here’s an alternative, an implementation of the more standard Sieve of Eratosthenes: ```#include <stdio.h> #include <stdlib.h> typedef unsigned long long ll; int main(){ ll lim = 100000000LL; char *sieve = malloc(lim); for(int i=0; i<lim; i++) sieve[i] = 1; int s=0; for(int i=2; i<lim; i++){ if(sieve[i]){ s++; for(int j=2; j<=lim/i; j++) sieve[i*j] = 0; } } printf("%d", s); return 0; } ``` I was surprised to find that the Sieve of Eratosthenes actually ran faster. It completed in 7.289 seconds. I expected the Sieve of Sundaram to be faster because according to Wikipedia this algorithm uses $O(n \log(n))$ operations, while the Sieve of Eratosthenes uses $O(n \log(n) \log \log(n))$. 12 Comments | Mathematics, Programming | Tagged: algorithm, c, number theory, prime, sieve, sundaram | Permalink Posted by luckytoilet
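For comparison, here is a rough Python transcription of the Sieve of Sundaram above (a sketch, not a replacement for the C benchmark). One detail worth noting: the index $i + j + 2ij$ can equal $m$ exactly, so the sieve array here has $m+1$ entries, which the C version's `malloc(m)` buffer does not appear to leave room for.

```python
def sundaram(n):
    """Primes up to n via the Sieve of Sundaram."""
    m = n // 2
    marked = [False] * (m + 1)                 # index k stands for the odd number 2k + 1
    for i in range(1, m + 1):
        for j in range(i, (m - i) // (2 * i + 1) + 1):
            marked[i + j + 2 * i * j] = True   # 2k+1 = (2i+1)(2j+1) is composite
    return [2] + [2 * k + 1 for k in range(1, m + 1) if not marked[k] and 2 * k + 1 <= n]

print(len(sundaram(10**6)))                    # 78498, the number of primes below one million
```

Counting primes below $10^8$ this way in pure Python would be slow; the point is only to make the marking rule concrete.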
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922356903553009, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/127244/what-are-the-specifics-and-the-possible-outputs-of-pollards-rho-algorithm/127257
# What are the specifics and the possible outputs of Pollard's Rho algorithm? I'm trying to implement a simple prime factorization program (for Project Euler), and want to be able to use Pollard's Rho algorithm. I read the Wikipedia, wolfram|alpha, and planet math explanations of this algorithm. I think I understand the steps, but I don't fully understand the underlying math. How does this algorithm work? Also, I tried to do it by hand and was unable to get an answer for some numbers, such as 9. Why is that? Lastly, how do I pick the polynomial function to use? is `f(x) = (x^2 + a) mod n` all that is necessary? Note: I'm a freshman in High School, and I've taken up to Pre-Calculus, so that's my level. Edit: This is what I wrote in ruby; However, for a lot of numbers it just gives back that same number. (like entering 9 returns 9) ```def rho(n) if n%2 == 0 return 2 end x = Random.rand(2...(n-3)) y = x c = y g = 1 while g==1 x = ((x*x)%n + c)%n y = ((y*y)%n + c)%n y = ((y*y)%n + c)%n g = (x-y).gcd n end return g end number = gets.chomp.to_i puts rho(number) ``` - ## 2 Answers Pollard's Rho algorithm is essentially a smart way to guess possible factors of a number - the $x$ values are generated with a kind of pseudo-random number generator, and by the construction, it is fairly likely to find a factor. There is no hard-and-fast guarantee here, since it essentially relies on a probabilistic argument, but it is proven empirically. Your choice of $f$ is somewhat uncommon and may be related to the problem you are experiencing with finding only trivial factorizations. I have good experience with simply using $f(x) = (x^2 + 1) \mod n$, and retrying with $(x^2+2) \mod n$, $(x^2 + 3) \mod n$ etc. if this fails. - Take $n=9$, and say $x=4$. First time through the loop, $x=4^2+4=2$, $y=2^2+4=8$ (all computations reduced modulo 9), $\gcd(x-y,9)=3$, problem solved. - Why does it work with `x = 4, y = 2` but not with `x = 4, y = 3`? – Vishnu Apr 3 '12 at 1:20 Huh? Your program starts by setting $y$ equal to $x$, so the way you have set it up you don't get to say $x=4,y=2$. What do you mean? – Gerry Myerson Apr 3 '12 at 1:44 Wait... What is your function (f)? – Vishnu Apr 3 '12 at 1:59 I'm following the code you posted, insofar as I understand it. First you choose $x$ at random between 2 and $n-3=6$; I chose 4. Then you set $y$ equal to $x$, and $c$ equal to $y$, so $c=4$. Then you are iterating the function $f(t)=t^2+c$, so that's $t^2+4$. – Gerry Myerson Apr 3 '12 at 3:41
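One standard way to deal with the behaviour described in the question (the routine handing back $n$ itself, e.g. for $n = 9$) is to treat $\gcd = n$ as a failed attempt and restart with a different constant in the polynomial. A hedged Python sketch of that variant (not the asker's Ruby; the retry limit is arbitrary and the function assumes $n$ is composite):

```python
from math import gcd
import random

def pollard_rho(n, tries=20):
    """Pollard's rho with restarts; returns a nontrivial factor of a composite n, or None."""
    if n % 2 == 0:
        return 2
    for c in range(1, tries + 1):         # retry with f(x) = x^2 + c for c = 1, 2, 3, ...
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n           # tortoise: one step
            y = (y * y + c) % n           # hare: two steps
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:                        # d == n means the cycle closed before a factor split off
            return d
    return None

print(pollard_rho(9))      # 3
print(pollard_rho(8051))   # 83 or 97, depending on the random starting point
```

This mirrors the advice in the first answer: keep $f(x) = x^2 + c$ but move on to the next $c$ whenever a trial fails.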
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9489755630493164, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-math-topics/208912-least-upper-bound-proof-print.html
# Least Upper Bound Proof Printable View • December 2nd 2012, 12:51 PM JBrandon Least Upper Bound Proof IF A is a non-empty bounded set of real numbers with least upper bound L and p is a positive real number, then the set pA has least upper bound pL. definition i have to use: The number q is an UPPER BOUND for the set A if x<=q for all x in A. The LEAST UPPER BOUND of the nonempty set A is L if L is an upper bound for A and if r is a number less than L, then there is a member a in A so that a > r. what i have so far. Since A is non empty and the least upper bound of A is L, then a<=L for all a in A. Since a<=L for all a in A and p is positive, then pa<=pL. therefor pL is a upper bound of pA. now im lost on the second part of this problem where i need to prove that pL is the least upper bound of pA. my initial thought was to just use the same trick as above but this feels flawed for some reason. since L is the least upper bound of A, then r < a <=L where r is a number less than L. since r < a <=L where r is a number less than L, and p is positive, then rp < pa <= pL. since rp < pa <= pL, and rp is a number less than pL, then pL is the least upper bound of pA. • December 2nd 2012, 01:06 PM Plato Re: Least Upper Bound Proof Quote: Originally Posted by JBrandon IF A is a non-empty bounded set of real numbers with least upper bound L and p is a positive real number, then the set pA has least upper bound pL. Since A is non empty and the least upper bound of A is L, then a<=L for all a in A. Since a<=L for all a in A and p is positive, then pa<=pL. therefor pL is a upper bound of pA. now im lost on the second part of this problem where i need to prove that pL is the least upper bound of pA. Let $\lambda=\text{LUB}(pA)$. Suppose that $\lambda<pL$. (We know that $\lambda\le pL$ WHY?) It follows that $\frac{\lambda}{p}<L$ so $(\exists a'\in A)\left[\frac{\lambda}{p}<a'\le L\right]$ WHY? How is that a contradiction? • December 2nd 2012, 01:55 PM JBrandon Re: Least Upper Bound Proof Quote: Originally Posted by Plato Let $\lambda=\text{LUB}(pA)$. Suppose that $\lambda<pL$. (We know that $\lambda\le pL$ WHY?) since i know pL is a upper bound of pA the least upper bound of pA must be less than or equal to pL. Quote: Originally Posted by Plato It follows that $\frac{\lambda}{p}<L$ so $(\exists a'\in A)\left[\frac{\lambda}{p}<a'\le L\right]$ WHY? since i know L is the least upper bound of A, then $\frac{\lambda}{p}<a'$ Quote: Originally Posted by Plato How is that a contradiction? not to sure on this one. is it because $\frac{\lambda}{p}$ is supposed to equal L but by the proof is less than L? • December 2nd 2012, 01:59 PM Plato Re: Least Upper Bound Proof Quote: Originally Posted by JBrandon not to sure on this one. is it because $\frac{\lambda}{p}$ is supposed to equal L but by the proof is less than L? Well that implies that $\lambda<pa'$. Can that be? • December 2nd 2012, 04:52 PM JBrandon Re: Least Upper Bound Proof so the full proof should be. Let $A \neq \phi$, $L=\text{LUB}(A)$, $a \in A$, $pa \in pA$, and p be a positive number. Since $A \neq \phi$, and $L=\text{LUB}(A)$, then $a \le L$. Since $a \le L$, then $pa \le pL$. therefor $pL=\text{UB}(pA)$. Assume $\lambda=\text{LUB}(pA)$ since $\lambda=\text{LUB}(pA)$, and $pL=\text{UB}(pA)$, then $\lambda \le pL$. since $\lambda \le pL$, then $\frac{\lambda}{p} < L$ since $\frac{\lambda}{p} < L$, and $L=\text{LUB}(A)$, then $\frac{\lambda}{p} < a \le L$ since $\frac{\lambda}{p} < a \le L$, then $\lambda < pa \le pL$. 
a contradiction as $pa \le \lambda$ must be true for $\lambda=\text{LUB}(pA)$ therefor $pL < \lambda$ and $pL = \text{LUB}(pA)$ • December 2nd 2012, 05:42 PM Plato Re: Least Upper Bound Proof Quote: Originally Posted by JBrandon so the full proof should be. Let $A \neq \phi$, $L=\text{LUB}(A)$, $a \in A$, $pa \in pA$, and p be a positive number. Since $A \neq \phi$, and $L=\text{LUB}(A)$, then $a \le L$. Since $a \le L$, then $pa \le pL$. therefor $pL=\text{UB}(pA)$. Assume $\lambda=\text{LUB}(pA)$ since $\lambda=\text{LUB}(pA)$, and $pL=\text{UB}(pA)$, then $\lambda \le pL$. since $\lambda \le pL$, then $\frac{\lambda}{p} < L$ since $\frac{\lambda}{p} < L$, and $L=\text{LUB}(A)$, then $\frac{\lambda}{p} < a \le L$ since $\frac{\lambda}{p} < a \le L$, then $\lambda < pa \le pL$. a contradiction as $pa \le \lambda$ must be true for $\lambda=\text{LUB}(pA)$ therefor $pL < \lambda$ and $pL = \text{LUB}(pA)$ Honestly, I don't know how to answer you questions. I taught analysis for more than 35 years. I drilled into students that if $\lambda=\text{LUB}(A)$ then for any $c>0$ then it must be the case that $(\exists t\in A)[\lambda-c<t\le \lambda]$. Oddly enough, that turns out to be the most important idea in analysis. So if $\lambda<pa'$ because $pa'\in pA$ that must be a contradiction because $pa'\le\lambda~.$ All times are GMT -8. The time now is 05:54 PM.
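For reference, the whole argument can be stated compactly with the definition quoted at the start of the thread (this is only a tidied restatement, not new mathematics): since $a \le L$ for every $a \in A$ and $p > 0$, we get $pa \le pL$, so $pL$ is an upper bound for $pA$. Now let $r$ be any number less than $pL$. Then $r/p < L$, so by the definition of least upper bound there is some $a' \in A$ with $a' > r/p$, hence $pa' \in pA$ and $pa' > r$. Both conditions in the definition hold, so $pL = \text{LUB}(pA)$.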
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 70, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397034645080566, "perplexity_flag": "head"}
http://stochastix.wordpress.com/2012/11/22/herons-method-using-integer-arithmetic/
# Rod Carvalho ## Heron’s method using integer arithmetic Suppose we are given a positive rational number $y$, whose square root $\sqrt{y}$ we would like to compute. To be more precise, we would like to compute an approximation of $\sqrt{y}$. Let $x_k \neq 0$ be an approximation of $\sqrt{y}$. Hence, we have that $\sqrt{y}$ is the geometric mean of $x_k$ and $y / x_k$. If we replace the geometric mean of these two numbers by their arithmetic mean, we hopefully obtain a “better” approximation $x_{k+1}$, as follows $x_{k+1} = \displaystyle\frac{1}{2} \left( x_k + \frac{y}{x_k}\right)$ If all goes well, then $x_N$ will be “sufficiently close” to $\sqrt{y}$ for a “sufficiently large” integer $N$. This algorithm is the famous Heron’s method, devised by Heron of Alexandria some 2000 years ago. Some call it the Babylonian method, which suggests that the algorithm may be some 4000 years old. It’s not merely vintage, it’s antique. Nonetheless, this is not a post on archaeology. We don’t want to sit around and talk about Heron’s method, we want to actually implement it. What programming language should we use? Let us use Haskell. How are we going to represent $y$ and the sequence of approximations $(x_k)_{k \in \mathbb{N}_0}$? __________ Heron’s method using floating-point arithmetic Though millions of people have already implemented Heron’s method using floating-point arithmetic (I implemented it in C in late 2000), one more implementation would do no harm: ```y :: Floating a => a y = 2 -- define Heron map g :: Floating a => a -> a g x = 0.5 * (x + y/x) -- initial approximation x0:: Floating a => a x0 = 1 -- list of approximations xs :: Floating a => [a] xs = iterate g x0``` Let us load this script into GHCi. Here’s a GHCi session: ```*Main> take 6 xs [1.0,1.5,1.4166666666666665,1.4142156862745097, 1.4142135623746899,1.414213562373095] *Main> let x5 = xs !! 5 *Main> x5**2 - 2 -4.440892098500626e-16``` We thus have the following approximation after five iterations $x_5 = 1.414213562373095 \approx \sqrt{2}$ __________ Heron’s method using rational arithmetic Let us now use rational numbers of arbitrary precision. The following script uses the Data.Ratio library: ```import Data.Ratio y :: Rational y = 2 -- define Heron map g :: Rational -> Rational g x = 0.5 * (x + y/x) -- initial approximation x0:: Rational x0 = 1 % 1 -- list of approximations xs :: [Rational] xs = iterate g x0``` Let us load this script into GHCi. Here’s a GHCi session: ```*Main> take 6 xs [1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832, 886731088897 % 627013566048] *Main> let x5 = xs !! 5 *Main> x5*x5 - 2 1 % 393146012008229658338304 *Main> 1 / 393146012008229658338304 2.5435842395854372e-24``` Note that the error is eight orders of magnitude smaller than in the previous case (where we used finite-precision floating-point numbers). We pay for more accuracy with more computation. Thus, after five iterations, we have the following rational approximation $x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$ This is probably not news to you, my dear reader. Last April, I did, in fact, implement Heron’s method under the disguise of Newton’s method (see the appendix). __________ Heron’s method using integer arithmetic A rational number $y \in \mathbb{Q}$ can be expressed as a fraction $a / b$, where $a$ and $b$ are integers and $b \neq 0$. Note that $a / b$ can (obviously?) be represented as a pair of integers $(a,b)$. 
Since Haskell has arbitrary-precision integers, we can implement Heron’s method using pairs of arbitrary-precision integers to represent $y$ and the sequence of approximations $(x_k)_{k \in \mathbb{N}_0}$. Let $y := a / b$ and $x_k := p_k / q_k$. The recurrence relation $x_{k+1} = \displaystyle\frac{1}{2} \left( x_k + \frac{y}{x_k}\right)$ can thus be rewritten in the following form $\displaystyle\frac{p_{k+1}}{q_{k+1}} = \frac{1}{2} \left( \frac{p_k}{q_k} + \frac{a}{b} \frac{q_k}{p_k} \right) = \frac{b \, p_k^2 + a \, q_k^2}{2 b \, p_k \, q_k}$ We then have two coupled recurrence relations $\left[\begin{array}{c} p_{k+1}\\ q_{k+1}\end{array}\right] = \left[\begin{array}{c} b \, p_k^2 + a \, q_k^2\\ 2 b \, p_k \, q_k\end{array}\right]$ where $a$, $b$, $p_k$ and $q_k$ are integers. Here is a Haskell script that implements this vector recurrence relation: ```a, b :: Integer a = 2 b = 1 -- define Heron map g :: (Integer,Integer) -> (Integer,Integer) g (p,q) = (b * p^2 + a * q^2, 2 * b * p * q) -- initial approximation p0, q0 :: Integer p0 = 1 q0 = 1 -- list of approximations xs :: [(Integer,Integer)] xs = iterate g (p0,q0)``` Let us load this script into GHCi. Here’s a GHCi session: ```*Main> take 6 xs [(1,1),(3,2),(17,12),(577,408),(665857,470832), (886731088897,627013566048)] *Main> let x5 = xs !! 5 *Main> x5 (886731088897,627013566048)``` Hence, after five iterations, we obtain the very same rational approximation we obtained in the previous case (where we used rational numbers of arbitrary-precision) $x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$ It should be noted, however, that this implementation using arbitrary-precision integers is different in one very important aspect from the one using arbitrary-precision rational numbers. What is the difference? Let GHCi answer this question: ```*Main> import Data.Ratio *Main Data.Ratio> 2 % 2 1 % 1 *Main Data.Ratio> 4 % 4 1 % 1 *Main Data.Ratio> 3 % 6 1 % 2``` The main difference between the two implementations is that when we use arbitrary-precision rational numbers, there is automatic reduction of every fraction to its irreducible form, which requires computing the greatest common divisor (GCD) of the numerator and denominator of each rational number at each iteration. This has a computational cost, of course, but it also has the benefit of avoiding reducible fractions (which are inefficient representations of rational numbers). When we use pairs of arbitrary-precision integers, we do not compute the GCD (although we could, if we wanted to), but we may have to work with pairs of unnecessarily long integers (that correspond to reducible fractions), which is potentially dangerous. For example, suppose that we replace the first three lines in the script above with the following three lines: ```a, b :: Integer a = 2000 b = 1000``` We run the script again. Here’s yet another GHCi session: ``` *Main> take 6 xs [(1,1),(3000,2000),(17000000000,12000000000), (577000000000000000000000,408000000000000000000000), (665857000000000000000000000000000000000000000000000, 470832000000000000000000000000000000000000000000000), (8867310888970000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000, 62701356604800000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000)] *Main> let x5 = xs !! 
5
*Main> x5
(8867310888970000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000, 62701356604800000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000)```

Frankly, I am shocked! Multiplying $a$ and $b$ by $1000$ results in $p_5$ and $q_5$ being multiplied by $10^{93}$: the Heron map is quadratic in $(p_k, q_k)$ and linear in $(a,b)$, so a common factor $c$ of $a$ and $b$ enters the pair as $c^{2^k-1}$ after $k$ iterations, and here $1000^{2^5-1} = 1000^{31} = 10^{93}$. That is a lot more zeros than I expected.

__________

Appendix: from Heron to Newton

Interestingly, Heron’s method can be derived from the no less famous Newton’s method. Let us introduce a function $f : \mathbb{R} \to \mathbb{R}$, defined by $f (x) = x^2 - y$. By construction, the zeros of function $f$ are $\sqrt{y}$ and $-\sqrt{y}$. Recall that Newton’s method uses the recurrence relation $x_{k+1} = g ( x_k )$, where function $g : \mathbb{R} \to \mathbb{R}$ is defined by $g (x) = x - \displaystyle\frac{f (x)}{f' (x)}$ where $f'$ is the first derivative of $f$. Hence, we obtain the following $g (x) = x - \displaystyle\frac{f (x)}{f' (x)} = x - \displaystyle\left(\frac{x^2 - y}{2 x}\right) = \displaystyle\frac{1}{2} \left( x + \frac{y}{x}\right)$ which is the recurrence relation used in Heron’s method.

Tags: Arbitrary-Precision Arithmetic, Data.Ratio, Haskell, Heron’s Method, Integer Arithmetic, Newton’s Method, Numerical Methods, Rational Approximations, Rational Arithmetic

This entry was posted on November 22, 2012 at 18:21 and is filed under Haskell, Numerical Methods.

### 3 Responses to “Heron’s method using integer arithmetic”

1. Simon Read Says: November 23, 2012 at 12:56 | Reply

In that case, you probably also know the very similar one for cube roots of $N$: $x[i+1] = \displaystyle\frac{1}{3} \left( 2 x[i] + \frac{N}{ x[i]^2 } \right)$ which can also be derived from Newton-Raphson. I’ve tried doing this in rational arithmetic and of course it works, but the numbers get rather out of hand: the number of digits multiplies by three each iteration, but the accuracy (number of accurate digits in floating point) only multiplies by two. That seems like the reward is not in proportion to the effort required.

• Rod Carvalho Says: November 25, 2012 at 11:04 | Reply

I’ve tried doing this in rational arithmetic and of course it works, but the numbers get rather out of hand (…) That seems like the reward is not in proportion to the effort required.

I have just written a post with some of my numerical experiments. I agree with you: the integers in the ratios just explode. It’s scary. I suppose that the lesson to be learned from this is: with finite machines we can do finite-precision arithmetic comfortably, but doing arbitrary-precision arithmetic is a dangerous game.

2. Simon Read Says: November 24, 2012 at 08:59 | Reply

I have a few other mathematics ideas we might like to discuss, but where do we put them?

1) How do you know Newton-Raphson is converging? How do you MAKE it converge? I have discovered a test which makes it converge.

2) Power series for pi. I have discovered a very simple two-stage method which is better than any power series, (Ramanujan eat your heart out!) but not as fast as more modern quadratically-convergent methods. It might spark ideas on two-stage methods of computing functions.

3) Two-stage method of computing sine and cosine. The extra stage of computation gives you another degree of freedom which might let you optimise the accuracy/effort balance.

4) Puzzle: a goat in a circular field. Apparently this was part of a Cambridge University entrance exam for mathematics. It’s still a fun puzzle to solve.
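Addendum (not part of the original post): one way to keep the integer-pair implementation from accumulating the huge common factors shown above is to divide each pair by its gcd after every step, which is exactly the reduction that the `Rational` type performs automatically. The script below is my own variation on the one above, shown for the $a = 2000$, $b = 1000$ case.

```
a, b :: Integer
a = 2000
b = 1000

-- reduce a pair to lowest terms by dividing out the gcd
reduce :: (Integer, Integer) -> (Integer, Integer)
reduce (p, q) = (p `div` d, q `div` d)
  where d = gcd p q

-- Heron map followed by reduction to lowest terms
gReduced :: (Integer, Integer) -> (Integer, Integer)
gReduced (p, q) = reduce (b * p^2 + a * q^2, 2 * b * p * q)

-- list of reduced approximations
xsReduced :: [(Integer, Integer)]
xsReduced = iterate gReduced (1, 1)
```

In GHCi, `xsReduced !! 5` should give `(886731088897,627013566048)` again, since $a/b$ is still $2$; the price is one gcd computation per iteration, which is the same trade-off discussed above for arbitrary-precision rational numbers.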
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 55, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113300442695618, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/38139/scattering-singularity
# scattering singularity

In QFT, when one works out the cross section between two colliding electrons, one gets a formula proportional to $\theta^{-4}$ at small scattering angle $\theta$, which is due to a nearly on-shell photon with momentum $\approx 0$. See for example Peskin and Schroeder pg 155. This is also seen in ordinary Rutherford scattering and apparently is not a problem experimentally, as one can't detect particles along the collision beam. Also, in Rutherford scattering the singularity gets removed if one takes into account the screening of the nucleus by the electron cloud (see QM by Messiah (1961), pg 422), which modifies the Coulomb nature of the potential. I was wondering if this singularity is indicating some problem with the calculation? Is there some effect in the calculation which has been neglected which, when included, would remove the singularity? Could renormalization effects maybe have a similar effect to the electron cloud in the Rutherford case? -

## 1 Answer

The divergence is real but does not reflect a problem. It arises because the potential has infinite range. It is similar to saying that in theory every snowball in the Andromeda galaxy is infinitesimally scattered by the sun, and it is just as meaningless because every really distant particle is more strongly perturbed by some other effect. This divergence is not observed in interactions with a finite range such as the weak and effective (nuclear) strong interactions. -

Thanks, but doesn't an infinite cross-section imply an infinite number of electrons are scattered with $\theta=0$? Is it just that the electrons are scattered a very long way away from each other? Would this singularity go away if one used wave packets rather than plane waves? – physicsphile Sep 24 '12 at 3:23

Dear @physicsphile, quite on the contrary, the infinite cross section (long range force) means that there are no electrons at all that get through at $\theta=0$: all electrons get scattered, usually with extremely small angles $\theta$. Well, the angle $\theta$ at which an electron scatters is (approximately) a decreasing, power-law function of the impact parameter, much like in classical physics. ... The singularity shouldn't go away and doesn't go away and you should stop asking about it. It's a true formula with a clear physical, long-distance (infrared) interpretation. – Luboš Motl Sep 24 '12 at 6:09

Hi @Luboš, sorry I don't quite understand your comment. Say we have $\frac{d\sigma}{d\Omega}\,d\Omega=\frac{\text{number of particles scattered into } d\Omega \text{ per second}}{\text{number of incident particles per second per unit area in the } \rho \text{ plane}}$ where the $\rho$ plane is perpendicular to the beam and $d\Omega$ is the solid angle covered by the detector. Then if we want to work out the total number of particles scattered, don't we integrate over $\Omega$? But if so, and $\frac{d\sigma}{d\Omega}\propto\frac{1}{\sin^4(\theta/2)}$, won't the integral be non-convergent? – physicsphile Sep 24 '12 at 10:11

@physicsphile The thing is that your beam has finite dimensions---or at least the part of it you care about has finite dimensions. For any finite range of impact parameters the number of particles scattered into any $\mathrm{d}\Omega$ is finite. So in the real world there is no problem. – dmckee♦ Sep 25 '12 at 0:12

@dmckee sorry I don't quite follow what you are saying. Along the beam $\theta=0$, so the singular region will also contribute as far as I can see. If the singular region contributes, the integral will not converge. – physicsphile Sep 25 '12 at 2:10
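A short piece of working (mine, not part of the original thread) that makes the point in these comments explicit. With the Rutherford/Mott behaviour $\frac{d\sigma}{d\Omega}\propto\frac{1}{\sin^4(\theta/2)}$, the total cross section is

$\sigma_{\mathrm{tot}} \;\propto\; 2\pi \int_0^{\pi} \frac{\sin\theta\,d\theta}{\sin^4(\theta/2)} \;\approx\; 2\pi\int_{0} \frac{\theta\,d\theta}{(\theta/2)^4} \;=\; 32\pi \int_{0} \frac{d\theta}{\theta^{3}},$

which diverges entirely from the $\theta \to 0$ end, i.e. from arbitrarily large impact parameters, while the number scattered into any finite $d\Omega$ away from the beam stays finite. A screened Coulomb potential with screening length $a$ suppresses momentum transfers $q \lesssim \hbar/a$; since $q = 2p\sin(\theta/2) \approx p\,\theta$, this cuts the integral off roughly below $\theta_{\min} \sim \hbar/(p\,a)$ and makes $\sigma_{\mathrm{tot}}$ finite, which is the screening effect mentioned in the question (the numerical factors here are only rough).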
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9405749440193176, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Problem_of_Apollonius&diff=33162&oldid=9601
# Problem of Apollonius
## Current revision

Fields: Geometry and Fractals
Image Created By: Paul Nylander
Website: Fractals

This is an example of a fractal that can be created by repeatedly solving the Problem of Apollonius.

# Basic Description

[Figure: the three given circles in red, green, and blue, with a solution circle in black. Image by: Wikipedia]

The problem of Apollonius involves trying to find a circle that is tangent to three objects: points, lines, or circles in a plane. The most famous of these is the case involving three different circles in a plane, as seen in the figure. The given three circles are in red, green, and blue, while the solution circle is in black.

Apollonius of Perga posed and solved this problem in his work called Tangencies. Sadly, Tangencies has been lost, and only a report of his work by Pappus of Alexandria is left. Since then, other mathematicians, such as Isaac Newton and Descartes, have been able to recreate his results and discover new ways of solving this interesting problem.

The problem usually has eight different solution circles that are tangent to the given three circles in a plane. The given circles must not be tangent to each other, overlapping, or contained within one another for all eight solutions to exist.

Given three points, the problem only has one solution. In the cases of one line and two points; two lines and one point; and one circle and two points, the problem has two solutions. Four solutions exist for the cases of three lines; one circle, one line, and one point; and two circles and one point. There are eight solutions for the cases of two circles and one line; and one circle and two lines, in addition to the three circle problem.

# A More Mathematical Explanation

There are many different ways of solving the problem of Apollonius. The few that are easiest to understand include using an algebraic method or an inverse geometry method.

### Algebraic Method

This method only uses math up to the level of understanding quadratic equations. We will proceed by setting up a system of quadratic equations and solving for the radius, r, of the unknown circle.

We start by labeling the center of each of the given circles $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$. We will call the center of the unknown circle $(x,y)$. $r_1$, $r_2$, and $r_3$ are the different radii of each of the given circles.

From this we are able to write our equations:

• $(x - x_1)^2 + (y - y_1)^2 = (r \pm r_1)^2$
• $(x - x_2)^2 + (y - y_2)^2 = (r \pm r_2)^2$
• $(x - x_3)^2 + (y - y_3)^2 = (r \pm r_3)^2$

Next we are able to expand each of the equations to see better how they can relate to each other. Expanding gives us:

• $(x^2+y^2-r^2)-2xx_1-2yy_1\pm2rr_1+(x_1^2+y_1^2-r_1^2)=0$
• $(x^2+y^2-r^2)-2xx_2-2yy_2\pm2rr_2+(x_2^2+y_2^2-r_2^2)=0$
• $(x^2+y^2-r^2)-2xx_3-2yy_3\pm2rr_3+(x_3^2+y_3^2-r_3^2)=0$

We can now look at the equations and see how we can subtract them from each other. So we will take the second and third equation minus the first equation.
Second minus first gives us:

• $2(x_1-x_2)x+2(y_1-y_2)y+2(\pm r_1 \pm r_2)r=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2)$

Third minus first gives us:

• $2(x_1-x_3)x+2(y_1-y_3)y+2(\pm r_1 \pm r_3)r=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2)$

For the sake of simplicity, we'll define some new variables. Let

$a_2=2(x_1-x_2)$ ; $b_2=2(y_1-y_2)$ ; $c_2=2(\pm r_1 \pm r_2)$ ; $d_2=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2)$

$a_3=2(x_1-x_3)$ ; $b_3=2(y_1-y_3)$ ; $c_3=2(\pm r_1 \pm r_3)$ ; $d_3=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2)$

Now our two equations can be written as

$a_2 x+b_2 y +c_2 r=d_2$
$a_3 x+b_3 y +c_3 r=d_3$

Since this is a simple linear system of equations, we can solve it for x and y in terms of r.

Solving the first equation for x:

$a_2x=d_2-c_2r-b_2y \rightarrow x=\frac{d_2-c_2 r -b_2 y}{a_2}$

Substituting that into the second equation allows us to find y in terms of known values and r:

$a_3\left(\frac{d_2-c_2 r -b_2 y}{a_2} \right)+b_3 y +c_3 r =d_3$

$a_3 d_2-a_3c_2 r -a_3 b_2 y+a_2 b_3 y +a_2 c_3 r =d_3 a_2$

$y(a_2 b_3-a_3 b_2)=a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r$

• $y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}$.

Rather than substituting back in to find x, it is actually simpler to go through the same process we used to find y to get x in terms of known values and r.

First, we solve the first equation for y.

$b_2y=d_2-c_2r-a_2x \rightarrow y=\frac{d_2-c_2 r -a_2 x}{b_2}$

Plugging into the second equation gives us

$a_3 x+b_3\left(\frac{d_2-c_2 r -a_2 x}{b_2}\right) +c_3 r =d_3$

$a_3 b_2 x+b_3 d_2 -b_3 c_2 r-a_2 b_3 x +b_2 c_3 r=b_2 d_3$

$x(a_3 b_2-a_2 b_3)=b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r$

• $x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}$.

With $x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}$ and $y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}$, we plug in values for the a's, b's, c's and d's (which we get from the original information about the centers and radii of the three circles) to calculate x and y. Using our very first equations for the circles, we can then solve for r.

Let's pick three circles and find the mutually tangent ones. Let's choose some coordinates and radii for our three circles.

Take $(x_1,y_1,r_1)=(0,3,2)$, $(x_2,y_2,r_2)=(-2,-2,1)$ and $(x_3,y_3,r_3)=(3,-3,3)$. These three circles are shown below.

[Figure: the three given circles]

Now we want to calculate the a's, b's, and d's first.

$a_2=2(x_1-x_2)=2(0-(-2))=4$

$a_3=2(x_1-x_3)=2(0-3)=-6$

$b_2=2(y_1-y_2)=2(3-(-2))=10$

$b_3=2(y_1-y_3)=2(3-(-3))=12$

$d_2=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2) =(0^2+3^2-2^2)-(2^2+2^2-1^2)=(9-4)-(4+4-1)=-2$

$d_3=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2) =(0^2+3^2-2^2)-(3^2+3^2-3^2)=(9-4)-(9+9-9)=-4$

Calculating the c terms requires a bit more thought since there are the $\pm$ signs. The choice of these signs determines which circle we are solving for. We simply must be consistent in all of our applications of signs for a given r. For the first example, let's simply take all of the plus signs. Then

$c_2=2(r_1+r_2)=2(2+1)=6$

$c_3=2(r_1+r_3)=2(2+3)=10$.

Now we can calculate x and y for this first circle.

$x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}=\frac{(10)(-4)-(12)(-2)+(12(6)-10(10))r}{-6(10)-4(12)}=\frac{-16-(28)r}{-108}$

$x=\frac{1}{27}(4+7r)$.

$y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}=\frac{(4)(-4)-(-6)(-2)+ (-6(6)-4(10))r}{12(4)-(10)(-6)}=\frac{-28-(76)r}{108}$

$y=\frac{-7-19r}{27}$.
Now we can return to one of our first equations to find r.

$(x-x_1)^2+(y-y_1)^2=(r+r_1)^2$

Note that here the sign of $r_1$ is positive. That is because we took the positive sign when we solved for the c values. Plugging in values, we get

$\left(\frac{1}{27}(4+7r)\right)^2+\left(\frac{-7-19r}{27}-3 \right)^2=(r+2)^2$

This equation is quadratic in r, so it can be solved using the quadratic formula (or a graphing calculator, if you prefer). When the dust clears, we get $r\approx 4.729$ (the other value that comes out of the quadratic formula does not work when plotted).

Now we can plot this circle with center $\left(\frac{1}{27}(4+7(4.729)),\frac{-7-19(4.729)}{27} \right)$. It is shown below in red, with the original circles in black. We can see that it is indeed tangent to the three original circles!

This process can be repeated choosing different signs for the different r values in the c coefficients to find the other seven circles.

## Apollonian Gasket

The Apollonian gasket is an example of one of the earliest studied fractals and was first constructed by Gottfried Leibniz. It can be constructed by solving the problem of Apollonius iteratively. It was a precursor to Sierpinski's Triangle, and in a special case, it forms Ford Circles.

Constructing the gasket begins with three mutually tangent circles. By solving this case of the problem of Apollonius we know that there are two other circles that are tangent to the three given circles. We now have five circles from which to start again. Repeat the process with two of the original circles and one of the newly generated circles. Again, by solving Apollonius' problem we can find two circles that are tangent to this new set of three circles. However, we already know one of the two solutions for this set of three circles: it is the remaining one of the three circles that we started with. Repeating this process over and over again with each set of three mutually tangent circles will create the Apollonian gasket.

## Interactive Applet

An interactive applet (hosted at https://www.cs.drexel.edu/~rlw82/Canvas/ResizeCircles/) lets you drag the circles around for solutions to be drawn, and drag the red circles to resize that circle.

## References

Math Pages, Apollonius' Tangency Problem

MathWorld, Apollonius' Problem

Wikibooks, Apollonian fractals

## Mathematica Programs

Anna created several programs that solve the problem of Apollonius in the case of three non-tangent, non-intersecting circles that the user inputs. The programs were written in Mathematica 7, but are likely compatible with Mathematica 5 and/or 6.

The first program automatically plots all eight solutions. Click here to download this program (http://www.sccs.swarthmore.edu/users/09/anna09/mathematica/apolloniussolution.nb)

The second program allows the user to choose which solutions to plot in groups of two. Click here to download this program (http://www.sccs.swarthmore.edu/users/09/anna09/mathematica/apolloniussolutionoptions.nb)
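The algebraic method above is easy to turn into a small program. Below is a sketch of my own (it is not one of the Mathematica programs linked above, and all names are mine) that follows the same steps: build the $a$, $b$, $c$, $d$ coefficients for one choice of the $\pm$ signs, solve the linear system for $x$ and $y$ in terms of $r$, and then solve the resulting quadratic in $r$. It assumes the three circle centers are not collinear (so the linear system is solvable) and that the equation in $r$ really is quadratic.

```haskell
import Data.Maybe (mapMaybe)

data Circle = Circle { cx, cy, cr :: Double } deriving Show

-- One tangent circle for one choice of signs (s1, s2, s3), each +1 or -1,
-- mirroring the page's convention: c2 = 2(s1*r1 + s2*r2), c3 = 2(s1*r1 + s3*r3),
-- with the final equation (x - x1)^2 + (y - y1)^2 = (r + s1*r1)^2.
apollonius :: Circle -> Circle -> Circle -> (Double, Double, Double) -> Maybe Circle
apollonius (Circle x1 y1 r1) (Circle x2 y2 r2) (Circle x3 y3 r3) (s1, s2, s3) =
  let a2 = 2*(x1 - x2); b2 = 2*(y1 - y2); c2 = 2*(s1*r1 + s2*r2)
      d2 = (x1^2 + y1^2 - r1^2) - (x2^2 + y2^2 - r2^2)
      a3 = 2*(x1 - x3); b3 = 2*(y1 - y3); c3 = 2*(s1*r1 + s3*r3)
      d3 = (x1^2 + y1^2 - r1^2) - (x3^2 + y3^2 - r3^2)
      -- x = px + qx*r and y = py + qy*r, from the linear system above
      px = (b2*d3 - b3*d2) / (a3*b2 - a2*b3); qx = (b3*c2 - b2*c3) / (a3*b2 - a2*b3)
      py = (a2*d3 - a3*d2) / (a2*b3 - a3*b2); qy = (a3*c2 - a2*c3) / (a2*b3 - a3*b2)
      -- substitute into (x - x1)^2 + (y - y1)^2 = (r + s1*r1)^2 and collect powers of r
      ex = px - x1; ey = py - y1
      qa = qx^2 + qy^2 - 1
      qb = 2*(ex*qx + ey*qy - s1*r1)
      qc = ex^2 + ey^2 - r1^2
      disc = qb^2 - 4*qa*qc
      roots = [(-qb - sqrt disc) / (2*qa), (-qb + sqrt disc) / (2*qa)]
  in if disc < 0
       then Nothing
       else case filter (> 0) roots of
              []      -> Nothing
              (r : _) -> Just (Circle (px + qx*r) (py + qy*r) r)  -- take a positive radius

-- Up to eight solutions, one per sign pattern.
allSolutions :: Circle -> Circle -> Circle -> [Circle]
allSolutions k1 k2 k3 =
  mapMaybe (apollonius k1 k2 k3) [(s1, s2, s3) | s1 <- [1, -1], s2 <- [1, -1], s3 <- [1, -1]]
```

With the example circles from this page, `apollonius (Circle 0 3 2) (Circle (-2) (-2) 1) (Circle 3 (-3) 3) (1, 1, 1)` gives a radius of about 4.729, matching the worked example above.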
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 91, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8027055263519287, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Eulerian_path
# Eulerian path

[Figure: The Königsberg Bridges graph. This graph is not Eulerian, therefore a solution does not exist.]

[Figure: Every vertex of this graph has an even degree, therefore this is an Eulerian graph. Following the edges in alphabetical order gives an Eulerian circuit/cycle.]

In graph theory, an Eulerian trail (or Eulerian path) is a trail in a graph which visits every edge exactly once. Similarly, an Eulerian circuit or Eulerian cycle is an Eulerian trail which starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. Mathematically the problem can be stated like this: Given the graph of the bridges, is it possible to construct a path (or a cycle, i.e. a path starting and ending on the same vertex) which visits each edge exactly once?

Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in the graph have an even degree, and stated without proof that connected graphs with all vertices of even degree have an Eulerian circuit. The first complete proof of this latter claim was published posthumously in 1873 by Carl Hierholzer.[1]

The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs.[2]

For the existence of Eulerian trails it is necessary that no more than two vertices have an odd degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree, all Eulerian trails are circuits. If there are exactly two vertices of odd degree, all Eulerian trails start at one of them and end at the other. A graph that has an Eulerian trail but not an Eulerian circuit is called semi-Eulerian.

## Definition

An Eulerian trail,[3] or Euler walk in an undirected graph is a path that uses each edge exactly once. If such a path exists, the graph is called traversable or semi-eulerian.[4]

An Eulerian cycle,[3] Eulerian circuit or Euler tour in an undirected graph is a cycle that uses each edge exactly once. If such a cycle exists, the graph is called Eulerian or unicursal.[5] The term "Eulerian graph" is also sometimes used in a weaker sense to denote a graph where every vertex has even degree. For finite connected graphs the two definitions are equivalent, while a possibly unconnected graph is Eulerian in the weaker sense if and only if each connected component has an Eulerian cycle.

For directed graphs, "path" has to be replaced with directed path and "cycle" with directed cycle.

The definition and properties of Eulerian trails, cycles and graphs are valid for multigraphs as well.

An Eulerian orientation of an undirected graph G is an assignment of a direction to each edge of G such that, at each vertex v, the indegree of v equals the outdegree of v. Such an orientation exists for any undirected graph in which every vertex has even degree, and may be found by constructing an Euler tour in each connected component of G and then orienting the edges according to the tour.[6] Every Eulerian orientation of a connected graph is a strong orientation, an orientation that makes the resulting directed graph strongly connected.

## Properties

• An undirected graph has an Eulerian cycle if and only if every vertex has even degree, and all of its vertices with nonzero degree belong to a single connected component.
• An undirected graph can be decomposed into edge-disjoint cycles if and only if all of its vertices have even degree. So, a graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint cycles and its nonzero-degree vertices belong to a single connected component. • An undirected graph has an Eulerian trail if and only if at most two vertices have odd degree, and if all of its vertices with nonzero degree belong to a single connected component. • A directed graph has an Eulerian cycle if and only if every vertex has equal in degree and out degree, and all of its vertices with nonzero degree belong to a single strongly connected component. Equivalently, a directed graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint directed cycles and all of its vertices with nonzero degree belong to a single strongly connected component. • A directed graph has an Eulerian trail if and only if at most one vertex has (out-degree) − (in-degree) = 1, at most one vertex has (in-degree) − (out-degree) = 1, every other vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single connected component of the underlying undirected graph. ## Constructing Eulerian trails and circuits ### Fleury's algorithm Fleury's algorithm is an elegant but inefficient algorithm which dates to 1883.[7] Consider a graph known to have all edges in the same component and at most two vertices of odd degree. The algorithm starts at a vertex of odd degree, or, if the graph has none, it starts with an arbitrarily chosen vertex. At each step it chooses the next edge in the path to be one whose deletion would not disconnect the graph, unless there is no such edge, in which case it picks the remaining edge left at the current vertex. It then moves to the other endpoint of that vertex and deletes the chosen edge. At the end of the algorithm there are no edges left, and the sequence in which the edges was chosen forms an Eulerian cycle if the graph has no vertices of odd degree, or an Eulerian trail if there are exactly two vertices of odd degree. While the graph traversal in Fleury's algorithm is linear in the number of edges, i.e. O(|E|), we also need to factor in the complexity of detecting bridges. If we are to re-run Tarjan's linear time bridge-finding algorithm after the removal of every edge, Fleury's algorithm will have a time complexity of O(|E|2). A dynamic bridge-finding algorithm of Thorup (2000) allows this to be improved to $O(|E|\log^3|E|\log\log|E|)$ but this is still significantly slower than alternative algorithms. ### Hierholzer's algorithm Hierholzer's 1873 paper provides a different method for finding Euler cycles that is more efficient than Fleury's algorithm: • Choose any starting vertex v, and follow a trail of edges from that vertex until returning to v. It is not possible to get stuck at any vertex other than v, because the even degree of all vertices ensures that, when the trail enters another vertex w there must be an unused edge leaving w. The tour formed in this way is a closed tour, but may not cover all the vertices and edges of the initial graph. • As long as there exists a vertex v that belongs to the current tour but that has adjacent edges not part of the tour, start another trail from v, following unused edges until returning to v, and join the tour formed in this way to the previous tour. 
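To make the splicing idea concrete, here is a small sketch of my own (not from the article) of the usual stack-based formulation of Hierholzer's algorithm. It stores the multigraph as a Map of adjacency lists, so it does not achieve the linear time bound discussed in the next paragraph, but it follows the same logic; it assumes a connected graph without self-loops in which every vertex has even degree.

```haskell
import qualified Data.Map.Strict as M

type Vertex = Int
type Graph  = M.Map Vertex [Vertex]  -- each undirected edge appears in both endpoint lists

-- Remove one copy of the undirected edge (u,v).
deleteEdge :: Vertex -> Vertex -> Graph -> Graph
deleteEdge u v = M.adjust (deleteOnce v) u . M.adjust (deleteOnce u) v
  where
    deleteOnce _ [] = []
    deleteOnce x (y:ys)
      | x == y    = ys
      | otherwise = y : deleteOnce x ys

-- Hierholzer's algorithm with an explicit stack: keep following unused edges, and
-- when the top vertex has none left, emit it; the emitted vertices form the circuit.
eulerCircuit :: Vertex -> Graph -> [Vertex]
eulerCircuit start graph = go [start] graph []
  where
    go []        _ acc = acc
    go (v:stack) g acc =
      case M.findWithDefault [] v g of
        []      -> go stack g (v : acc)                       -- v exhausted: emit it
        (w : _) -> go (w : v : stack) (deleteEdge v w g) acc  -- extend the current trail

-- Example: two triangles sharing vertex 1 (a "figure eight").
example :: Graph
example = M.fromList
  [ (1, [2, 3, 4, 5]), (2, [1, 3]), (3, [1, 2]), (4, [1, 5]), (5, [1, 4]) ]

-- eulerCircuit 1 example evaluates to [1,2,3,1,4,5,1], an Euler circuit.
```

Replacing the list-based deleteEdge with the constant-time edge bookkeeping described below is what brings this down to linear time.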
By using a data structure such as a doubly linked list to maintain the set of unused edges incident to each vertex, to maintain the list of vertices on the current tour that have unused edges, and to maintain the tour itself, the individual operations of the algorithm (finding unused edges exiting each vertex, finding a new starting vertex for a tour, and connecting two tours that share a vertex) may be performed in constant time each, so the overall algorithm takes linear time.[8] ## Counting Eulerian circuits ### Complexity issues The number of Eulerian circuits in digraphs can be calculated using the so-called BEST theorem, named after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte. The formula states that the number of Eulerian circuits in a digraph is the product of certain degree factorials and the number of rooted arborescences. The latter can be computed as a determinant, by the matrix tree theorem, giving a polynomial time algorithm. BEST theorem is first stated in this form in a "note added in proof" to the Aardenne-Ehrenfest and de Bruijn paper (1951). The original proof was bijective and generalized the de Bruijn sequences. It is a variation on an earlier result by Smith and Tutte (1941). Counting the number of Eulerian circuits on undirected graphs is much more difficult. This problem is known to be #P-complete.[9] In a positive direction, a Markov chain Monte Carlo approach, via the Kotzig transformations (introduced by Anton Kotzig in 1968) is believed to give a sharp approximation for the number of Eulerian circuits in a graph.[10] ### Special cases The asymptotic formula for the number of Eulerian circuits in the complete graphs was determined by McKay and Robinson (1995):[11] $ec(K_n) = 2^{(n+1)/2}\pi^{1/2} e^{-n^2/2+11/12} n^{(n-2)(n+1)/2} \bigl(1+O(n^{-1/2+\epsilon})\bigr).$ A similar formula was later obtained by M.I. Isaev (2009) for complete bipartite graphs:[12] $ec(K_{n,n}) = (n/2-1)!^{2n} 2^{n^2-n+1/2}\pi^{-n+1/2} n^{n-1} \bigl(1+O(n^{-1/2+\epsilon})\bigr).$ ## Applications Eulerian trails are used in bioinformatics to reconstruct the DNA sequence from its fragments.[13] They are also used in CMOS circuit design to find an optimal logic gate ordering.[14] ## See also • Eulerian matroid, an abstract generalization of Eulerian graphs • The handshaking lemma, proven by Euler in his original paper, showing that any undirected connected graph has an even number of odd-degree vertices • Hamiltonian path – a path that visits each vertex exactly once. • Veblen's theorem, that graphs with even vertex degree can be partitioned into edge-disjoint cycles regardless of their connectivity ## Notes 1. C. L. Mallows, N. J. A. Sloane (1975). "Two-graphs, switching classes and Euler graphs are equal in number". SIAM Journal on Applied Mathematics 28 (4): 876–880. doi:10.1137/0128070. JSTOR 2100368. 2. ^ a b Some people reserve the terms path and cycle to mean non-self-intersecting path and cycle. A (potentially) self-intersecting path is known as a trail or an open walk; and a (potentially) self-intersecting cycle, a circuit or a closed walk. This ambiguity can be avoided by using the terms Eulerian trail and Eulerian circuit when self-intersection is allowed. 3. Schrijver, A. (1983), "Bounds on the number of Eulerian orientations", Combinatorica 3 (3-4): 375–380, doi:10.1007/BF02579193, MR 729790 . 4. Fleury, M. (1883), "Deux problèmes de Géométrie de situation", Journal de mathématiques élémentaires, 2nd ser. (in French) 2: 257–261 . 5. 
Fleischner, Herbert (1991), "X.1 Algorithms for Eulerian Trails", Eulerian Graphs and Related Topics: Part 1, Volume 2, Annals of Discrete Mathematics 50, Elsevier, pp. X.1–13, ISBN 978-0-444-89110-5 . 6. Brightwell and Winkler, "Note on Counting Eulerian Circuits", 2004. 7. Tetali, P.; Vempala, S. (2001). "Random Sampling of Euler Tours". 30: 376–385. 8. Brendan McKay and Robert W. Robinson, Asymptotic enumeration of eulerian circuits in the complete graph, , 10 (1995), no. 4, 367–377. 9. M.I. Isaev (2009). "Asymptotic number of Eulerian circuits in complete bipartite graphs". Proc. 52-nd MFTI Conference (in Russian) (Moscow): 111–114. 10. Pevzner, Pavel A.; Tang, Haixu; Waterman, Michael S. (2001). "An Eulerian trail approach to DNA fragment assembly". Proceedings of the National Academy of Sciences of the United States of America 98 (17): 9748–9753. Bibcode:2001PNAS...98.9748P. doi:10.1073/pnas.171285098. PMC 55524. PMID 11504945. 11. Roy, Kuntal (2007). "Optimum Gate Ordering of CMOS Logic Gates Using Euler Path Approach: Some Insights and Explanations". Journal of Computing and Information Technology 15 (1): 85–92. doi:10.2498/cit.1000731. ## References • Euler, L., "Solutio problematis ad geometriam situs pertinentis", Comment. Academiae Sci. I. Petropolitanae 8 (1736), 128–140. • Hierholzer, Carl (1873), "Ueber die Möglichkeit, einen Linienzug ohne Wiederholung und ohne Unterbrechung zu umfahren", 6 (1): 30–32, doi:10.1007/BF01442866 . • Lucas, E., Récréations Mathématiques IV, Paris, 1921. • Fleury, "Deux problemes de geometrie de situation", Journal de mathematiques elementaires (1883), 257–261. • T. van Aardenne-Ehrenfest and N. G. de Bruijn, Circuits and trees in oriented linear graphs, Simon Stevin, 28 (1951), 203–217. • Thorup, Mikkel (2000), "Near-optimal fully-dynamic graph connectivity", , pp. 343–350, doi:10.1145/335305.335345 • W. T. Tutte and C. A. B. Smith, On Unicursal Paths in a Network of Degree 4. Amer. Math. Monthly, 48 (1941), 233–237.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8889759182929993, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/2507/can-i-encrypt-user-input-in-a-way-i-cant-decrypt-it-for-a-certain-period-of-tim/2519
# Can I encrypt user input in a way I can't decrypt it for a certain period of time?

I run a baseball league and would like to do silent auctions for free agents. This would require teams to enter their highest bid, and the highest bidder at the end of the auction period would win. Unfortunately, it's my league, my code, my server, so I have access to the information if I choose to look at it. I'd like to not put myself in that position and have that information available. Is there any way I could encrypt the inputted data so that I can't unlock it for a specific amount of time (2 days)? Is this possible?

An alternate way of doing it would involve md5'ing the user's input, but that would require them to come back and 'unlock' their bid. I don't like this b/c if a player gets injured after their bid, they'll 'forget' their bid and not unlock it... I'm running LAMP/LAPP.

---

UPDATE - Solution (for my case...)

To answer the question of league trust, there are three factors. 1) will I cheat? the smallest issue I actually have b/c I don't think I will 2) will the league think I cheated? it doesn't matter whether I actually do if them thinking I did causes an issue 3) if a donut is sitting on the table you can choose to not eat it, but you'd rather it not be sitting there. even if you can restrain yourself, it isn't always pleasant to do so. I'd like to think I wouldn't actually cheat, but don't want to have to restrain myself.

Solution: It seems there isn't a good general solution to this problem. Several of the solutions required a fair amount of technical knowledge from the majority of the league, which, unfortunately, they don't have - but there are 2 other programmers in the league. Here's what I'm settling on: I'm extending the auction from 2 days to 3 days. I'm going to bid on the first day and md5 my bid with a salt "jibberish-text-salt-3.5M" and display that encrypted text on the auction page. I won't change my bid after that; others can bid over the next 2 days. At the end of the auction, I'll post my salted bid so the other programmers can confirm I haven't cheated. Anybody else that is concerned can learn to use md5; I feel I've done enough to keep myself honest at this point.

Without being able to change my bid, and being forced to bid early, I'm at a slight disadvantage - a player could get injured, etc. To even this out, I'm instituting an option for me to appeal to the league that my bid be canceled should something huge happen (since I would wait until the last minute otherwise to prevent this from happening). The league will then vote on it. There's also the possibility that someone I'm not interested in gets nominated and then 2 days later I gain interest (promoted to Closer for instance). This can't really be accounted for, so I'm just going to risk it for the sake of the league...

Thank you all for your suggestions. They didn't work for me in this exact situation, but I learned a lot in the process and was exposed to some new interesting ideas. I'm going to accept mgorven's answer as I feel the trusted 3rd party approach is the most correct general use case solution. I'm not sure what the protocol is for this... -

I had an idea that isn't answer-worthy: Cron jobs on an EC2 instance. Register a new micro instance, make the password a random string so you can't get back in after you log off once. Put the passwords in a Python script that sends an email. Put the script in a cron job and log off the instance. It's an ugly hack but it's fairly low-tech.
– pg1989 2 days ago

It does solve the problem. Although I'd rather not have to pay for an ec2 instance. Good solution! – dan yesterday

## 6 Answers

This is an interesting discussion of the problem, as well as this crypto.SE question (they seem to be called "time-lock puzzles"). The most reliable scheme is to use a trusted third party which provides a public key and only releases the private key some time later. The only scheme that doesn't require a trusted third party is to iteratively hash a random value for the time period you want, use the result to encrypt the information, and then discard the result and store the initial random value. To decrypt the data, the iterative hashing needs to be performed again in order to recover the result which was used to encrypt the data, which should take the same amount of time it did before (although hashing can be sped up with GPUs or ASICs). -

I think 3rd party is the way to go. I can have another member of my league generate a public key and come back to unlock with a private key. This is the only way that makes sense to me that keeps me from having the information. Thanks. – dan May 1 '12 at 22:46

Using an iterated hash is a very blunt instrument if you don't have dedicated hardware and want the code to be unlocked at a precise point in time. If the server has four cores and has to unlock 100 codes, the whole process will take at least 25 times longer than it would take to unlock a single code. But if you want guarantees that the server won't peek at the contents in advance, you can't account for that. The result: if the first client places his bid one week in advance, the results will not be due until six months after the auction closes. – Henrick Hellström May 2 '12 at 6:32

@mgorven: The RSA puzzle certainly seems like a different (and better) scheme that doesn't require a trusted third party. – Ricky Demer May 2 '12 at 7:17

No. A computer can imagine it is any time arbitrarily; there is nothing intrinsic about any particular time. You could have it use really weak encryption so it would take 2 days to crack, and then just intentionally lose the private key, but you can't guarantee it will take 2 days (it could be less or more in that case) and that is a very unreliable solution. -

No, there is no way. The only thing I can imagine would be storing the cryptokey on a flashdrive, for example, and storing that flashdrive in a timelocked safe. Edit: For example, generate a public/private key pair. Store the private key on the flashdrive and put it in the timelocked safe, while using your public key to encrypt the results. -

1 You might want to expand a bit on this answer. I suspect you are suggesting some kind of PKI setup where bid submission would include encrypting against a public key, and then the message being stored for later retrieval by the un-available private key? – Zoredache May 1 '12 at 22:39

I've seen only one (ugly) implementation of a solution to your problem, and even that was not very user friendly:

1. Bidders set their bid, and the client-side function (javascript or something in your case) encrypts (symmetrically, with user-chosen one-time keys) their bidding offer, and posts it to your page (publicly visible, so everyone can see the encrypted message, and verify it was posted on time).

2. After the bidding time is over, all bidders provide their keys, and everyone can decrypt their messages, and see the offers.

3.
Since everyone saw the encrypted offers, they know they were posted on time; only the bidder knows the bidding amounts, since only he has the key, and after the bidding time is over, everyone can use the key and verify the bids.

As I said, it was an ugly solution, which I saw written only once, as a proof-of-concept, which has many problems (what happens if some bidders don't give their key, are their bids discarded, etc.). There is also a problem of collisions (same crypto-text, different bid values, depending on the key, but this is extremely improbable). -

You could use something like ssss. It will allow you to generate an arbitrary number of keys for a single piece of data and allow you to specify how many are needed to decrypt the data. This is called secret sharing. That way you can't decrypt it by yourself, but it also doesn't require that any particular person comply. In your situation, it would make sense to me to generate as many keys as there are people and require some large number of them to decrypt the data. This has the failing that the users could conspire to decrypt the data. It would be cool if you could require that one of them must be the server's key, but I don't think that feature exists in ssss. (It might in some other implementation.) You could probably fake this by using some cheap symmetric encryption on the data first before reencrypting it with ssss. -

I think your idea of having bidders commit to a bid by publishing a hash of it is a good starting point.1 However, as you note, it has the problem that bidders can effectively withdraw their bid by refusing to reveal it at the end of the auction. There are a number of ways to address this issue:

1. Just accept it and let bidders withdraw bids. This may cut into your profits in cases where the value of the resource being auctioned changes during the auction, but it may also make bidders happier and likely to bid less conservatively. (However, to make this work, you really should ensure that each bidder can only have one standing bid at any time, otherwise you effectively turn your blind auction into a public auction conducted at the reveal stage.)

2. Impose a penalty on refusing to reveal bids. For example, you could impose a rule that anyone unable or unwilling to reveal their bid must pay a sum equal to ($x$ times) the winning bid but gets nothing in return. This will not completely eliminate the incentive to withdraw extremely excessive bids, but it does provide a strong discouragement.

3. OK, you said you wanted a crypto solution. Here's one that could work, as long as the participants can communicate securely between each other: have each participant use Shamir's secret sharing (or any other $k$-out-of-$n$ secret sharing scheme) to split their bid into shares and distribute one share to each of the $n$ participants. Once the bidding is over, any group of $k$ participants may pool the shares they received and thus reveal the bids. (One nice feature of Shamir's scheme is that only the threshold size $k$ needs to be fixed in advance; it's easy to generate more shares as more participants join in.) If you don't want to allow a group of $k$ colluding bidders to be able to reveal the bids before you decide the bidding is over, you can have the bidders generate a random number (in the field over which you're using Shamir's algorithm), send it to you and share their bid XOR this random number instead (or vice versa; it makes no difference).
This is essentially a two-level secret sharing scheme, with a simple 2-out-of-2 scheme (XOR) overlaid on top of a $k$-out-of-$n$ scheme (Shamir's). I would also recommend combining this solution with the hash-based commitment scheme, so that bidders both commit to a bid and share it. This prevents other participants from trying to alter others' bids by changing their shares, and also prevents the bidder from trying to alter their own bid by publishing inconsistent shares. (Shamir's scheme does allow some limited detection of such tricks, as long as sufficiently many participants are honest, but combining it with a commitment scheme makes it much easier and more reliable.) It also lets each bidder verifiably reveal at least their own bid, even if the required quorum of $k$ participants fails to show up and pool their shares. 1 It should be noted that simply hashing the bid amount is not a good commitment scheme — an attacker could just calculate hashes for all likely amounts and see which one matches the bid. A better solution would be to hash a string containing the bid amount, a bidder ID and a large random number (of, say, 128 bits); for the secret sharing based solution, this string should also be the shared secret. Also, don't use MD5, use something more secure like SHA-256. -
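To make the hash-based commitment in the footnote above concrete, here is a minimal sketch of a commit/verify pair (my own illustration, not code from any of the answers). The hash function is passed in as a parameter, so any cryptographic hash such as SHA-256 from a crypto library can be supplied; I am deliberately not assuming any particular package's API, and all names here are made up for the example.

```haskell
import Data.ByteString.Char8 (ByteString, pack)

-- Stand-in type for whatever cryptographic hash you plug in (e.g. SHA-256).
type HashFn = ByteString -> ByteString

-- Commit to a bid by publishing hash(bidderId | bid | nonce).  The nonce is a
-- large random value the bidder keeps secret until the reveal phase; without it,
-- anyone could brute-force the small space of plausible bid amounts.
commit :: HashFn -> String -> Integer -> ByteString -> ByteString
commit h bidderId bid nonce =
  h (pack (bidderId ++ "|" ++ show bid ++ "|") <> nonce)

-- At reveal time, everyone recomputes the commitment from the opened values and
-- compares it with what was published before the deadline.
verify :: HashFn -> String -> Integer -> ByteString -> ByteString -> Bool
verify h bidderId bid nonce published = commit h bidderId bid nonce == published
```

This is only the commit/reveal half of the story; it does nothing about bidders who refuse to reveal, which is exactly the problem the numbered options above are meant to address.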
http://mathhelpforum.com/differential-geometry/171091-differentiation-banach-space.html
# Thread: Differentiation in Banach Space

1. ## Differentiation in Banach Space

Hello everyone, I am having some trouble with the following problem: find the derivative (Fréchet derivative) of the operator $F:C[0,\pi]\to C[0,\pi]$, $F(u)=\sin u(x)$, at the point $u_0(x) = \cos(x)$. The book gives the answer $F'(u_0)=\cos\sin x$, but I can't get it: I tried using the mean value theorem, Taylor series and various trigonometric identities to find the linear part of the increment, but I still don't see how the answer $\cos\sin x$ arises. I also don't think it's necessary to find the Gâteaux derivative here, which equals the Fréchet derivative in this case. There is a similar problem in the book, $F(u)=\cos u(x)$, $u_0(x) = \sin(x)$, with the answer $\sin\cos x$, so I don't think it is simply a misprint. Can anyone please help with this? Thanks in advance.
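For reference, the standard pointwise linearization of the superposition operator $u\mapsto\sin u$ runs as follows; this is a sketch of the usual argument, stated for a general $u_0$ and independent of the book's quoted answer:

$$\bigl(F(u_0+h)-F(u_0)\bigr)(x)=\sin\bigl(u_0(x)+h(x)\bigr)-\sin u_0(x)=\cos\bigl(u_0(x)\bigr)\,h(x)+R(x),\qquad |R(x)|\le \tfrac12\,\|h\|_\infty^2,$$

so $F'(u_0)$ is the multiplication operator $h\mapsto\cos(u_0)\,h$. The remainder bound comes from Taylor's theorem with $|\sin''|\le 1$ and is uniform in $x$, which is exactly what Fréchet differentiability in $C[0,\pi]$ requires.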
http://physics.stackexchange.com/questions/32556/question-regarding-the-bohm-interpretation
# Question regarding the Bohm interpretation

I tried to understand the Bohm interpretation, and this is the picture that formed for me. Please tell me if I understood something incorrectly.

• All particles have definite positions and follow deterministic rules of dynamics.
• Any future configuration of an isolated subsystem depends only on the initial conditions.
• Even slight differences in initial conditions may result in huge differences in the outcome.

The problem is that those initial conditions are inherently unknown. This is fundamental: even if an observer manages to measure the whole Bohm state of the entire universe, he still would not know the Bohm state of himself. This is like making predictions about the future states of a three-body system based on Newtonian mechanics with initial coordinates known only to finite precision. Due to the apparent chaoticity of the solutions, the possible outcomes may be dramatically different. Correct me if I am wrong.

This is correct. I don't see a question however. It's a little worse than what you say--- the Bohm particle positions are inherently unobservable because all you can see is the Everett branch they select to make real, and that's not enough information to determine anything at all about the positions, beyond a vague probability distribution as $|\psi|^2$. – Ron Maimon Jul 22 '12 at 9:54
As I understand it, there are no Everett branches in the Bohm interpretation. – Anixx Jul 22 '12 at 12:18
There is a full QM wavefunction in Bohm, so there are branches. The branches are always there in QM; the only question is whether you consider the unobserved branch "real" or "unreal" (which is positivistically meaningless). The only difference in Bohm is that one of the branches has particles running around; this is the real branch, and the other branches are empty, so they are unreal. So real/unreal is determined dynamically. But they are still there in the formalism, to reproduce the interference from far-away branches recohering. You can see this by Bohmian simulation of Shor's algorithm. – Ron Maimon Jul 22 '12 at 19:32
Where would I find a published account of the Bohmian simulation of Shor's algorithm? That sounds interesting. – Francis Davey Feb 3 at 12:23
http://stats.stackexchange.com/questions/32287/does-the-stepwisefit-function-in-matlab-handle-correlation-between-the-factors
# Does the stepwisefit function in MATLAB handle correlation between the factors?

I have been told to run a factor analysis using the stepwisefit function in MATLAB. Basically, this function helps you fit a model composed of $T$ factors $F=(f_1, ... , f_T)$, each of which has $N$ values, to a target vector $y$ of the same length. I know for a fact that my factors are correlated, and I was wondering whether this function assumes that the factors are uncorrelated or not... Should I still use this function or, if not, which function/method should I choose to perform my analysis?

Strange. I don't see any reference to factor analysis under the link. It says that `stepwisefit` is stepwise regression analysis. – ttnphns Jul 14 '12 at 14:30
Maybe my wording is wrong; I understand factor analysis as a multiple regression over different factors. – SRKX Jul 14 '12 at 14:31
I can't know what you might intend. You can see the definition of "factor analysis" by pointing at the tag (or reading Wikipedia). Does its meaning fit your case? – ttnphns Jul 14 '12 at 14:35
Yes, it absolutely does. Basically, I have a lot of underlying factors and I would like to know which of them are really meaningful. I would then like to find the right weights for the remaining factors to find my optimal fit. – SRKX Jul 14 '12 at 14:41
@SRKX, why don't you simply drop the correlated factors from further analysis? It is easy to do. Moreover, stepwisefit will do this for you. – Paul Jul 14 '12 at 15:54

## 1 Answer

You should use the elastic net method instead of the stepwise method. You can put the highly correlated predictors in your multiple linear regression; however, you won't get accurate results from a stepwise analysis unless you regularize your regression. The regularized linear regression function in MATLAB is `lasso`. When you are using it, you should set alpha to something between 0 and 1 in order to use the elastic net method instead of the default lasso penalty. That is because you have predictors which are correlated, and you would like your regression model to take this fact into account.
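If it helps to see the same idea outside MATLAB, here is a rough sketch using scikit-learn's elastic net in Python; the data are simulated and the settings arbitrary, so this illustrates the method rather than the `stepwisefit`/`lasso` calls themselves.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n, t = 200, 6
F = rng.normal(size=(n, t))
F[:, 1] = F[:, 0] + 0.05 * rng.normal(size=n)          # two deliberately correlated factors
y = 2.0 * F[:, 0] - 1.0 * F[:, 2] + 0.5 * rng.normal(size=n)

# l1_ratio between 0 (ridge) and 1 (lasso) gives the elastic net mixture
fit = ElasticNetCV(l1_ratio=0.5, cv=5).fit(F, y)
print(fit.coef_)    # correlated predictors tend to share weight rather than be picked arbitrarily
```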
http://matthewkahle.wordpress.com/2010/06/02/
# Coloring the integers

Here is a problem I composed, which recently appeared on the Colorado Mathematical Olympiad. If one wishes to color the integers so that every two integers that differ by a factorial get different colors, what is the fewest number of colors necessary?

I might describe a solution, as well as some related history, in a future post. But for now I'll just say that Adam Hesterberg solved this problem at Canada/USA Mathcamp a few summers ago, claiming the \$20 prize I offered almost as soon as I offered it. At the time, I suspected but still did not know the exact answer. Although the wording of the problem strongly suggests that the answer is finite, I don't think that this is entirely obvious. A brute-force check of candidate colorings is sketched below.

Along those lines, here is another infinite graph with finite chromatic number. If one wishes to color the points of the Euclidean plane so that every two points at distance one get different colors, what is the fewest number of colors necessary? This is one of my favorite unsolved math problems, just for being geometrically appealing and apparently intractably hard. After fifty years of many people thinking about it, all that is known is that the answer is 4, 5, 6, or 7. Recent work of Shelah and Soifer suggests that the exact answer may depend on the choice of set-theoretic axioms.

This inspired the following related question. If one wishes to color the points of the Euclidean plane so that every two points at factorial distance get different colors, do finitely many colors suffice? More generally, if $S$ is a sequence of positive real numbers that grows quickly enough (say exponentially), and one forbids pairs of points at distance $s$, for $s$ in $S$, from receiving the same color, one would suspect that finitely many colors suffice. On the other hand, if $S$ grows slowly enough (say linearly), one might expect that infinitely many colors are required.
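For experimenting with the first problem, a small brute-force checker is handy: given a candidate coloring and a finite range of integers, it looks for pairs at factorial distance that received the same color, so it can refute a coloring on that range but never prove one correct.

```python
from math import factorial

def conflicts(color, n_max):
    """Pairs (i, j) with j - i equal to a factorial and color(i) == color(j), for 0 <= i < j <= n_max."""
    facts, k = [], 1
    while factorial(k) <= n_max:
        facts.append(factorial(k))
        k += 1
    return [(i, i + d) for i in range(n_max + 1) for d in facts
            if i + d <= n_max and color(i) == color(i + d)]

# A coloring by residues alone cannot work: 4! = 24 is a multiple of 4,
# so coloring n by n % 4 puts 0 and 24 in the same class.
print(conflicts(lambda n: n % 4, 100)[:3])
```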
http://mathoverflow.net/questions/17247?sort=oldest
## Set comprehension when the condition is false

The Cartesian product of two empty sets is the singleton set $\{ () \}$ containing the empty tuple. So, given a set $A$ which is empty, $A \times A$ is defined as: $$A \times A = \{ (a,a) \mid a \in A \} = \{ () \}$$ Now, does that mean that $()$ satisfies the condition $a \in A$? And if so, why don't we include the empty tuple in the Cartesian product of non-empty sets? (It would be nice if you could point out which concept I misunderstand: the set comprehension or the tuple.) Thanks in advance. [edit: I should add the following link: Wikipedia: Empty_product#Nullary_Cartesian_product]

If $A = \emptyset$ is empty, then $A \times B$ is empty for any $B$. Note that the empty set is NOT equal to $\{\emptyset\}$. – Jason DeVito Mar 6 2010 at 0:12
I believe you're confusing "the empty product" with "the product of empty sets." The former is a singleton but the latter is empty. – François G. Dorais♦ Mar 6 2010 at 0:35
I don't think this question is suitable for MO. – Andrea Ferretti Mar 6 2010 at 1:19
The first equality in your equation is true, but the second equality, for reasons probably too confusing for your own good, is false. – Mariano Suárez-Alvarez Mar 6 2010 at 3:26

## 1 Answer

You're misinterpreting the Cartesian product. Your link to La Wik describes the Cartesian product of no sets, i.e. the zero-th Cartesian power of any set $A$. This is isomorphic to the space of functions from the empty set to $A$, which contains the empty function. The Cartesian product of two empty sets is the Cartesian square of the empty set, which is empty.

I now see where my confusion was. Thanks – M.S. Mar 6 2010 at 20:25
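The distinction drawn in the answer is easy to see concretely; for instance, Python's itertools.product makes exactly this split (shown purely as an illustration):

```python
from itertools import product

# Nullary Cartesian product: the product of an *empty family* of sets has one element, the empty tuple.
print(list(product()))        # [()]

# The Cartesian product of two empty sets is empty.
print(list(product([], [])))  # []
```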
http://physics.stackexchange.com/questions/46798/the-meaning-of-imaginary-time/46846
# The meaning of imaginary time

What is imaginary (or complex) time? I was reading about Hawking's wave function of the universe and this topic came up. If imaginary mass and similar imaginary quantities do not make sense in physics, why should imaginary (or complex) time make sense? By imaginary I mean a multiple of $i$, and by complex I mean having a real and an imaginary part, i.e. $\alpha + i\beta$, where $\alpha, \beta \in {\mathbb R}$.

I've heard about rotating integrals over time into the complex-time domain. I don't know if complex time would have a physical interpretation or if it's just a calculational tool. Or if your complex time and my complex time are even the same thing. Good question. – Todd R Dec 14 '12 at 2:27
I sometimes think imaginary time should be space and complex time should mean a combination of space and time twisted together... thanks to relativity. I speculate this since in the metrics of relativity the signs of the space and time components are opposite, so taking their square roots would... – namehere Dec 14 '12 at 11:40

## 3 Answers

The easiest way to see imaginary time used is in elementary quantum mechanics in one dimension. (This is the explanation cribbed from Wikipedia.) Suppose we're looking at a tunneling-through-a-barrier problem. We start with the Schrödinger equation: $$-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+V(x)\psi(x) = E\psi(x)$$ Make the ansatz $$\psi(x) = \psi_0 \exp\left(\frac{i}{\hbar}S(x)\right)$$ Then we get $$-\frac{i\hbar}{2m}\frac{d^2S(x)}{dx^2}+\frac{1}{2m}\left(\frac{dS(x)}{dx}\right)^2+V(x)-E=0$$ which is nonlinear. We can make progress with an $\hbar$ expansion $$S(x)=S_0(x)+\hbar S_1(x)+\frac{\hbar^2}{2}S_2(x)+\ldots$$ After a long calculation we can compute various amplitudes and derive things like the barrier tunneling coefficient $$T=\exp\left(-\frac{2}{\hbar}\,\mathrm{Im}(S)\right)$$ where $$\mathrm{Im}(S)=\int_a^b |p(x)|\,dx$$ with $p(x)=\sqrt{2m(E-V)}$ (so that inside the barrier $|p(x)|=\sqrt{2m(V-E)}$), and $a$ and $b$ are the $x$ values between which $E<V(x)$.

Now Feynman offers another way to approach this, namely that the amplitude to get from $x=a$ to $x=b$ is just $$\langle x=b|\exp\left(-\frac{iHt}{\hbar}\right)|x=a \rangle = \int \mathcal{D}[x(t)]\, \exp\left(\frac{iS[x(t)]}{\hbar}\right) \ \ \ (1)$$ where the integral is over the space of paths $x(t)$ with the right endpoints. Now this, although very elegant, is extremely hard to compute: it's an integral over an infinite-dimensional space after all! The imaginary time trick works as follows: you just make the change of variable $$t=-i\tau$$ and then the action $$S(x(t))=\int\left(\tfrac{1}{2}m \left(\tfrac{dx}{dt}\right)^2-V(x)\right) dt$$ becomes $$S(x(\tau))=i\int\left(\tfrac{1}{2}m \left(\tfrac{dx}{d\tau}\right)^2+V(x)\right) d\tau$$ so the potential energy has swapped sign relative to the kinetic energy (and we picked up an overall factor of $i$). Defining $$S_E(x(\tau))=\int\left(\tfrac{1}{2}m \left(\tfrac{dx}{d\tau}\right)^2+V(x)\right) d\tau,$$ our path integral is now $$\langle x=b|\exp\left(-\frac{H\tau}{\hbar}\right)|x=a \rangle = \int \mathcal{D}[x(\tau)]\, \exp\left(-\frac{S_E[x(\tau)]}{\hbar}\right)\ \ (2)$$ Now the integral will be dominated by classical paths which extremize this action. Whereas an extremal path contributing to (1) would require imaginary energy to tunnel through the potential, which looks like a hill, for (2) the potential hill is now a valley, and the corresponding extremal path is just that of a ball rolling down one side of the valley and up the other. Having done your computation in Euclidean space, you then take whatever answer you got and rotate back to Minkowski space. So much for mechanics.
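As a small numerical footnote to the quantum-mechanics example, here is a rough check of the WKB transmission factor quoted above; the barrier and the energy are made-up illustrative values, in units with $\hbar = m = 1$.

```python
import numpy as np

hbar = m = 1.0
E = 0.5                          # particle energy
V = lambda x: 1.0 - x**2         # an inverted-parabola barrier, V(x) > E for |x| < sqrt(0.5)

# classical turning points a, b, where E = V(x)
a, b = -np.sqrt(1.0 - E), np.sqrt(1.0 - E)

x = np.linspace(a, b, 10001)
p_abs = np.sqrt(2.0 * m * (V(x) - E))   # |p(x)| inside the barrier
Im_S = np.trapz(p_abs, x)               # Im(S) = integral of |p| dx between the turning points

T = np.exp(-2.0 * Im_S / hbar)          # WKB tunneling coefficient
print(Im_S, T)
```

The Euclidean saddle-point computation reproduces essentially this same $\int |p|\,dx$ integral, which is the content of the imaginary-time trick in this setting.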
You can do the same trick in field theory, where your path integral is now over field configurations. The Euclidean-space extremal field configurations are called instantons. Now, in your question, Hartle and Hawking were interested in what the equivalent of "x=a" in our simple example is for the initial conditions of the universe. Just like in the QM example, they were working in Euclidean time, and wanted their equivalent of "x=b" to be a de Sitter universe. Their guess was that, in the path integral, they should include all Euclidean metrics for spaces with no boundary. Just as our Euclidean extremal paths satisfy the equations of classical mechanics in Euclidean time, so the metrics included in the quantum cosmology path integral would satisfy the classical Euclidean-signature Einstein equations. So to summarize, Euclidean time is a clever trick for getting answers to extremely badly behaved path integral questions. Of course in the Planck epoch, in which the no-boundary path integral is being applied, maybe Euclidean time is the only time that makes any sense. I don't know - I don't think there's any consensus on this.

I will add to twistor59's answer. Hawking liked the concept of imaginary time $\tau=\mathrm{i}t$ because it transforms a Lorentzian metric $$ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2$$ into a four-dimensional Euclidean-like metric $$ds^2 = +c^2 d\tau^2 + dx^2 + dy^2 + dz^2$$ Hawking and others believed that a quantum gravity theory could be developed in this way. This approach was named the "Euclidean path integral approach" to quantum gravity, or simply "Euclidean quantum gravity". Hawking's views are summarized in J. B. Hartle and S. W. Hawking, "Wave function of the Universe", Phys. Rev. D 28 (1983) 2960–2975. This old approach does not work, because many difficulties and limitations arise with it. Notice that although imaginary times are sometimes used as a trick to simplify some mathematical computations in statistical mechanics and quantum field theory, they do not have any physical meaning.

Another way to look at it is to imagine that time is a curved dimension, in the sense that it is cyclic. To visualize that, imagine a two-dimensional plane; the usual third dimension is then a line perpendicular to this plane. Now consider this line to be bent into a circle, so that this dimension goes around and around; in a sense it has a maximum value after which you come back. For information, this approach is called compactification, and if you are familiar with complex numbers, remember that a purely imaginary exponential is periodic.
http://physics.stackexchange.com/questions/7191/statics-question-when-do-i-need-to-find-support-reactions?answertab=active
Statics question: When do I need to find support reactions?

We're finding forces (and whether they're in tension or compression) using the Method of Joints and the Method of Sections. I don't understand why sometimes it's necessary to find the support reactions, but sometimes it's not. Please help. Thank you!

My guess is that nobody's answered yet (after 9 hours) because we don't understand the engineering jargon. I don't, at least. If you can find a way to translate it into more physicsy language we might be able to help more. – Mark Eichenlaub Mar 19 '11 at 7:02
I agree with Mark. A brief description of a specific example would probably do the trick. – Ted Bunn Mar 19 '11 at 14:44
Sincerely, and with all due respect to the people here who know far more than I do, I must say that if someone doesn't understand the question, I would probably rather wait for someone who is familiar with the terminology. I'm sure you can understand. I accept the possibility that this question may be outside the scope of this site, and I appreciate the comments. – ChrisC70 Mar 19 '11 at 17:27

1 Answer

I have examined some material on statics, and having recently discovered this unanswered question I can provide some indications, from a physics perspective, of what is going on. The fundamental issue in statics is to provide the values of the forces acting on a rigid construction (usually of rods) with everything in equilibrium. (The examples I have looked at are two-dimensional, so I will assume that in what follows; one could add similar equations for 3D, but the essence of the OP's question can be discussed in 2D.) The equations available are the Newtonian conservation laws:

$\Sigma F_x = 0$ : the sum of the forces along the x-axis is zero
$\Sigma F_y = 0$ : the sum of the forces along the y-axis is zero
$\Sigma M_a = 0$ : the moment (torque) about a point (hence any point) is zero

These equations hold at any point, especially the connection points of the construction (often called a "truss"). The variables that appear in these equations are the forces at given points of the system: more accurately, the x- and y-components of those forces. The system (truss) is placed on the ground at N points; at each of these points there is a "support reaction" onto the truss. Thus there are N unknown support reactions (in the y-axis, say) to find in principle, labeled $V_1$, $V_2$, ..., $V_N$ say. There is also likely to be some form of external force in the problem. This external force $F$ is normally assumed to be applied at a specific point on the structure, and may have a horizontal (x-axis) component and a downward vertical (negative y-axis) component. I should remark here that in a wider class of physics and engineering problems the force F is assumed to be distributed across the structure rather than located at a single point: the primary example of such a force is gravity. However, the mathematical techniques needed to solve that case involve calculus and so lie outside the basic statics theory covered here. So assume a single external force acting at a point.

Can this problem be solved for all the forces? Well, it turns out that there is a mathematical problem here, which I shall explain with some simple examples.

Example 1: Beam with 2 supports and an external force. Let the length of the beam be L units, with supports (A, B) at each end and an external force F acting purely vertically on the beam at a point a distance $a$ from A and $b$ from B (hence $L=a+b$).
Then we have to solve (signs are important in general):

[1] $\Sigma F_y = V_A + V_B - F = 0$

[2] $\Sigma M_A = aF - (a + b) V_B = 0$

Here $F$, $a$ and $b=L-a$ are known and $V_A$ and $V_B$ are unknown. The key point is that we have two equations in two unknowns. The solution is easily seen to be:

$V_A = \frac{b}{a+b} F = \frac{b}{L}F$

$V_B = \frac{a}{a+b} F = \frac{a}{L}F$

So this is called a statically determinate system. However, the simplest of modifications can result in a statically indeterminate system.

Example 2: Beam with 3 supports. I won't go through the details, which are in this Wikipedia article. The point is that the equations are the same, except that there is now a third reaction force $V_C$. Therefore there are too many variables, and this problem cannot be solved by statics alone. In a sense this is quite a general case. However, the engineering challenge remains to model the forces on a connected truss. The Method of Joints does this by going through the truss iteratively: at each connection point there are just a few forces and variables, so that the problem is determinate and a new unknown force is determined. Eventually, by repeated application of this technique, all the forces on all the beams of the truss can be determined.

A different question can always be asked about any physical system, however: what are the forces acting at a particular point? This problem does not always require the full solution for the forces at every other point before it can be answered. So the idea is to construct a Free Body Diagram which extracts the key physics (in this case statics) of the problem. Doing so in static truss-like problems is called the Method of Sections. The key equations are the Newtonian force and torque equations used in statics, but applied in a selective way. This technique of determining where to "cut" the truss looks like a bit of an art, although there do seem to be some principles used in the tutorial material I have seen. Here is one such tutorial: Trusses - Method of Sections.
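A quick numerical check of Example 1 (the load and distances below are made-up values):

```python
import numpy as np

F, a, b = 100.0, 2.0, 3.0          # applied load and its distances from supports A and B
L = a + b

# Equilibrium: V_A + V_B = F, and moments about A: a*F - L*V_B = 0
coeffs = np.array([[1.0, 1.0],
                   [0.0, L]])
rhs = np.array([F, a * F])
V_A, V_B = np.linalg.solve(coeffs, rhs)

print(V_A, V_B)                    # 60.0 and 40.0, matching V_A = (b/L) F and V_B = (a/L) F
```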
http://math.stackexchange.com/questions/251511/is-the-map-linear?answertab=oldest
# Is the map linear?

$$\begin{bmatrix}a&b\\c&d\end{bmatrix}\mapsto \begin{bmatrix}ad-bc\\0\\0\end{bmatrix}$$

If it is linear, I need to find a basis for the kernel and image, but I am struggling to do this, so I don't think it's linear; I just have no idea why.

## 3 Answers

It's not linear because:

$\begin{bmatrix}1&0\\0&0\end{bmatrix}\mapsto \begin{bmatrix}0\\0\\0\end{bmatrix}$

$\begin{bmatrix}0&0\\0&1\end{bmatrix}\mapsto \begin{bmatrix}0\\0\\0\end{bmatrix}$

$\begin{bmatrix}1&0\\0&1\end{bmatrix}\mapsto \begin{bmatrix}1\\0\\0\end{bmatrix}\neq \begin{bmatrix}0\\0\\0\end{bmatrix}+\begin{bmatrix}0\\0\\0\end{bmatrix}$

I undeleted this because it is a correct answer. If you deleted it for another reason, let me know. – robjohn♦ Dec 5 '12 at 16:46

Let $X$ be a $2\times 2$ matrix. Call your map $F(X)$. Is $F(2X) = 2F(X)$?

So it is quite easy to see the kernel of $F$: you just have to look for $M \in M_2$ such that $\det(M)=0$ ...

It looks like the answer is missing a beginning. – Dan Shved Dec 5 '12 at 13:51
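A quick numerical check of the same failure of additivity (and of homogeneity, since $\det(cX)=c^2\det X$ for $2\times 2$ matrices); purely illustrative:

```python
import numpy as np

A = np.eye(2)                                   # det = 1
B = np.array([[1.0, 0.0], [0.0, 0.0]])
C = np.array([[0.0, 0.0], [0.0, 1.0]])

print(np.linalg.det(B) + np.linalg.det(C))      # 0.0
print(np.linalg.det(B + C))                     # 1.0 -> additivity fails
print(np.linalg.det(2 * A), 2 * np.linalg.det(A))   # 4.0 vs 2.0 -> homogeneity fails
```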
http://mathhelpforum.com/calculus/110766-correct-use-summation-sign.html
# Thread: Correct use of summation sign

1. ## Correct use of summation sign

Hi! I just need to be sure I have used the summation sign correctly. That is: $\sqrt{\sum_{i\not=j}^{n} A_{ij}^{2}}$ Correct me if I'm wrong, but this - when applied to an $n \times n$ matrix $A$ - would be the square root of the sum of all the individual elements squared, except the elements on the diagonal (where $i = j$); essentially the "length" of all the elements (again without the diagonal). It's probably very easy, but I just want to make sure I'm not mistaken! Thank you in advance!

2. Originally Posted by Jodles (quoted above)

It's fine, but usually $A_{ij}$ denotes the minors of the matrix $A$, so I'd rather write a non-capital $a$: $\sqrt{\sum_{i\not=j}^{n} a_{ij}^{2}}$

Tonio

3. Thank you, Tonio!
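Read as a double sum over all off-diagonal entries, the quantity is easy to compute; a small illustrative snippet (Python/NumPy, with a made-up matrix):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)

off_diag = A - np.diag(np.diag(A))          # zero out the diagonal
value = np.sqrt(np.sum(off_diag**2))        # sqrt of the sum of a_ij^2 over i != j

print(value)
```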
http://xianblog.wordpress.com/2011/06/26/normal-tail-precision/
# Xi'an's Og: an attempt at bloggin, from scratch…

## Normal tail precision

In conjunction with the normal-Laplace comparison mentioned in the most recent post about our lack of confidence in ABC model choice, we have been working on the derivation of the exact Bayes factor, and I derived an easy formula for the marginal likelihood in the Laplace case that boils down to a weighted sum of normal probabilities (with somewhat obvious notations) $m(x_1,\ldots,x_n)=2^{-n/2}\,\sum_{i=0}^n\,e^{\sqrt{2}\sum_{j=1}^i x_j-\sqrt{2}\sum_{j=i+1}^n x_j+(n-2i)^2\sigma^2}\times\left[ \Phi(\{x_{i+1}-\sqrt{2}(n-2i)\sigma^2\}/\sigma) - \Phi(\{x_{i}-\sqrt{2}(n-2i)\sigma^2\}/\sigma) \right]$ I then wrote a short R function that produces this marginal likelihood.

```
# ABC model comparison between Laplace and normal
# L(mu,V2) versus N(mu,1) with a prior N(0,2*2)
nobs=21
nsims=500
sqrtwo=sqrt(2)
marnor=function(smpl){
-0.5*nobs*log(2*pi)-0.5*(nobs-1)*var(smpl)+0.5*log(1+1/(4*nobs))-0.5*mean(smpl)^2/(4+1/nobs)}
marlap=function(sampl){
smpl=sort(sampl)
S=sum(smpl)
S=c(S,S-2*cumsum(smpl))
phi=pnorm((smpl-sqrtwo*4*(nobs-2*(1:nobs)))/2)
phip=pnorm((smpl-sqrtwo*4*(nobs-2*(1:nobs)+2))/2)
Dphi=log(c(phip[1],phip[-1]-phi[-nobs],1-phi[nobs]))
-0.5*nobs*log(2)+log(sum(exp(-sqrtwo*S+4*(nobs-2*(0:nobs))^2+Dphi)))
}
```

When checking it with an alternative Monte Carlo integration, Jean-Michel Marin spotted a slight but persistent numerical difference:

```
> test=sample(c(-1,1),nobs,rep=TRUE)*rexp(nobs,sqrt(2))
> exp(marlap(test))
[1] 3.074013e-10
> f=function(x){exp(-sqrt(2)*sum(abs(test-x)))*2^(-nobs/2)}
> mean(apply(as.matrix(2*rnorm(10^6)),1,f))
[1] 3.126421e-11
```

And while I thought it could be due to simulation error, he persisted in analysing the problem until he found the reason: the difference between the normal cdfs in the above marginal was replaced by zero in the tails of the sample, while it contributed in a significant manner, due to the huge weights in front of those differences! He then rewrote the marlap function so that the difference was better computed in the tails, with a much higher level of agreement!

```
marlap=function(test){
sigma2=4
lulu=rep(0,nobs-1)
test=sort(test)
for (i in 1:(nobs-1)){
cst=sqrt(2)*(nobs-2*i)*sigma2
if (test[i]<0) lulu[i]=exp(sqrt(2)*sum(test[1:i])-sqrt(2)*sum(test[(i+1):nobs])+
(nobs-2*i)^2*sigma2+pnorm((test[i+1]-cst)/sqrt(sigma2),log=TRUE)+
log(1-exp(pnorm((test[i]-cst)/sqrt(sigma2),log=TRUE)-
pnorm((test[i+1]-cst)/sqrt(sigma2),log=TRUE)))) else
lulu[i]=exp(sqrt(2)*sum(test[1:i])-sqrt(2)*sum(test[(i+1):nobs])+
(nobs-2*i)^2*sigma2+pnorm(-(test[i]-cst)/sqrt(sigma2),log=TRUE)+
log(1-exp(pnorm(-(test[i+1]-cst)/sqrt(sigma2),log=TRUE)-
pnorm(-(test[i]-cst)/sqrt(sigma2),log=TRUE))))
if (lulu[i]==0) lulu[i]=exp(sqrt(2)*sum(test[1:i])-sqrt(2)*sum(test[(i+1):nobs])+
(nobs-2*i)^2*sigma2+log(pnorm((test[i+1]-cst)/sqrt(sigma2))-
pnorm((test[i]-cst)/sqrt(sigma2))))
}
lulu0=exp(-sqrt(2)*sum(test[1:nobs])+nobs^2*sigma2+
pnorm((test[1]-sqrt(2)*nobs*sigma2)/sqrt(sigma2),log=TRUE))
lulun=exp(sqrt(2)*sum(test[1:nobs])+nobs^2*sigma2+
pnorm(-(test[nobs]+sqrt(2)*nobs*sigma2)/sqrt(sigma2),log=TRUE))
2^(-nobs/2)*sum(c(lulu0,lulu,lulun))
}
```

Here is an example of this agreement:

```
> marlap(test)
[1] 5.519428e-10
mean(apply(as.matrix(2*rnorm(10^6)),1,f))
[1] 5.540964e-10
```
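The underlying pitfall, a difference of normal cdfs rounding to zero deep in the tails, is easy to reproduce in any floating-point environment; for instance in Python/SciPy, used here only to illustrate the numerical point:

```python
from scipy.stats import norm

a, b = 10.0, 12.0                      # deep in the upper tail

naive = norm.cdf(b) - norm.cdf(a)      # both cdfs round to 1.0, so the difference is exactly 0
careful = norm.sf(a) - norm.sf(b)      # survival functions keep the tail mass

print(naive, careful)                  # 0.0 versus roughly 7.6e-24
```

The fix in the rewritten marlap is the same idea: keep the calculation on the log scale (pnorm(..., log=TRUE)) or work with the tail probability that has not rounded to one.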
http://www.physicsforums.com/showthread.php?p=4228946
Physics Forums

## Perpetual acceleration of the universe expansion?

There is a consensus on a slight acceleration of the expansion in our epoch, mainly from supernovae Ia measurements, but is there any evidence (apart from results from more distant supernovae) that would rule out the possibility that the universe has been expanding eternally with a tiny acceleration?

Recognitions: Gold Member
By characterizing the detailed structure of the cosmic microwave background fluctuations, WMAP has accurately determined the basic cosmological parameters, including the Hubble constant. The current best direct measurement of the Hubble constant is 73.8 km/sec/Mpc (give or take 2.4 km/sec/Mpc including both random and systematic errors), corresponding to a 3% uncertainty. Using only WMAP data, the Hubble constant is estimated to be 70.0 km/sec/Mpc (give or take 2.2 km/sec/Mpc), also a 3% measurement. This assumes that the universe is spatially flat, which is consistent with all available data. This measurement is completely independent of traditional measurements using Cepheid variables and other techniques. However, if we do not make an assumption of flatness, we can combine WMAP data with other cosmological data to get 69.3 km/sec/Mpc (give or take 0.8 km/sec/Mpc), a 1% solution that combines different kinds of measurements.

This is cut and pasted from the following: http://map.gsfc.nasa.gov/universe/uni_expansion.html

Not sure I would call that expansion slight, lol. Not sure on early-universe measurements, though they have confirmed that expansion was slower after the initial rapid expansion of the early universe.

Quote by Mordred: "they have confirmed that expansion was slower after the initial rapid expansion of the early universe"

Thank you, but the different and consistent measurements of the Hubble parameter only determine the present rate of expansion, without saying anything about the past or future rates (in fact, similar figures were obtained before the SNe Ia observations that ruled out deceleration at present)... Concerning the above quote, this is the commonly accepted model, but how did they observationally confirm the past deceleration of expansion?

Recognitions: Science Advisor
The predictions of big bang nucleosynthesis are sensitively dependent on the expansion rate of the universe, which is determined in Friedmann cosmology (the modern concordance model) by the dominant source of energy (relativistic matter, nonrelativistic matter, vacuum) in the universe at the time. Following the big bang (and/or inflation), the universe was dominated by radiation (which is considered relativistic matter) and expanded as a power law. As the universe cooled, nonrelativistic matter began to dominate. At about this time, the CMB decoupled from the relativistic plasma -- so the very existence and subsequent decoupling of the CMB is evidence that the universe underwent a transition from being dominated by relativistic to nonrelativistic energy densities. Additionally, the acoustic peaks in the temperature spectrum of the CMB (as well as the Sachs-Wolfe plateau) provide evidence for a universe that passed from a period of nonrelativistic matter domination to the recent phase of accelerated expansion.
Recognitions: Gold Member, Science Advisor
Quote by JuanCasado: "…how did they observationally confirm the past deceleration of expansion?"

Juan, have a look at this curve. It is the expansion history as generated by the Friedmann equation model. I want to use this to illustrate a point: http://ned.ipac.caltech.edu/level5/M...s/figure14.jpg

You can see it relates redshift to lookback time (now = zero, start ≈ -14 Gy, i.e. -14 billion years). You can see in rough outline a period of deceleration followed by a period of acceleration, but that is not the point I want to make. The point is that in a mathematical science you aren't individually checking a lot of disconnected details; you FIT A MODEL to all the available data and look for the simplest model with the best fit.

The Friedmann equation is a simplified form of the Einstein GR equation, which comes equipped with 3 physical constants, G, Λ, c (Newton's constant, the cosmological constant Lambda, and the speed of light). The Einstein GR equation is our LAW OF GRAVITY, which has been tested hundreds of different ways. The Friedmann equation, a version simplified by assuming approximate uniformity, inherits those 3 basic constants. The rest of the story is determining best estimates of things like the densities of matter and radiation and the current percentage rate of expansion, basically adjusting boundary conditions to get the best fit.

There are many, many different types of data. There is a huge amount of data. So you have to take this simple little equation and, by adjusting 3 or 4 numbers, make it fit tons and tons of data of all different kinds of observations. Here, in what I'm saying, I'm not trying to convince you of this or that proposition. I want to give an idea of the overall process. The approach is, in a sense, *holistic*. The model is a surprisingly simple equation and it generates the curve I showed you. But it also generates curves of TEMPERATURE and a curve of DENSITY, e.g. of matter, or of the ancient light called the "CMB". So all these things have to be cross-checked to see that they are physically consistent.
The past rates of star formation have to check with what the model says was the past matter density. The current temperature of the ancient light has to check with the matter density and temperature at the time it originated and the amount of expansion since then; expansion cools light by a known law. It is like the *cross-examination* at a trial: this very simple equation is the "witness", and everything possible should be examined for consistency.

==quote Juan== How did they observationally confirm the past deceleration of expansion? ==endquote==

I think the answer is that you don't directly observe an expansion rate at some moment in the past, or a rate of change of an expansion rate. You have a remarkably simple equation that gives an amazingly good fit to a huge amount of data. You continually interrogate this equation to make sure the story is physically consistent in every way you can think of. The equation is derived from the accepted law of gravity (= geometry). Alternative laws of gravity are constantly being invented and tried out, so far not demonstrating any advantage. The current consensus model will doubtless some day be successfully challenged, but so far it is passing all the tests people know how to devise. And it is what generates curves like the one I showed. See also the "Figure 1" link in my signature.

Could you provide any reference directly linking the spectrum of the CMB to the deceleration phase? As far as I know, the very existence of the CMB only demonstrates that at some moment in the past the temperature was high enough to keep the matter in the universe in a state of plasma... Marcus, thanks for your clear explanation. I was aware of the way the concordance model fits most observational data, but to my knowledge only the farthest away (and therefore least accurate) SNe Ia results are direct evidence of the deceleration rate, right?

Recognitions: Science Advisor
Quote by JuanCasado: "Could you provide any reference directly linking the spectrum of the CMB to the deceleration phase?…"

The CMB constrains the content of the universe at various epochs. Given the content of the universe at a certain epoch, the Friedmann model tells us what the expansion rate was. So the CMB data do not directly constrain the rate of expansion. You are correct that the presence of the CMB indicates high temperatures, but take that a step further. Matter in equilibrium at high T is relativistic -- the constituent particles behave like radiation. This means that the universe must evolve in a particular way, specifically, as a power law: $a(t) \sim t^{1/2}$. Before the universe cools below the binding energy of hydrogen and the CMB decouples, it passes a point at which radiation and matter are in equal abundance.
From this point on, non-relativistic matter dominates the expansion and the universe evolves as $a(t) \sim t^{2/3}$. The matter content of the universe (both dark matter and baryonic) can be determined by examining the relative heights of the 2nd and 3rd peaks in the CMB spectrum (see Wayne Hu's excellent CMB tutorials for more details: http://background.uchicago.edu/~whu/). Lastly, the broad anisotropy seen at large scales in the CMB spectrum (the Sachs-Wolfe plateau) is sensitive to the recent accelerated expansion. The fact that this effect is seen only at large spatial scales is related to its being a recent phenomenon. If the expansion had been accelerating all along, we'd have a very different CMB spectrum. For one, the plateau would extend across all scales, and there would be no acoustic peaks at all. For references, I'd recommend you spend some time looking through Wayne's tutorials to see how the CMB gives us insights into the composition and evolution of the universe. Also, any good introductory cosmology text that covers the thermal history of the universe should be helpful.
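To make the "deceleration followed by acceleration" statement concrete, here is a minimal numerical sketch of the flat LambdaCDM Friedmann model; the parameter values are round illustrative numbers, not a fit to data:

```python
import numpy as np

H0 = 70.0            # km/s/Mpc, illustrative round number
Om, OL = 0.3, 0.7    # matter and cosmological-constant density parameters (flat universe)

a = np.linspace(0.2, 1.4, 13)              # scale factor, a = 1 today
H = H0 * np.sqrt(Om * a**-3 + OL)          # Friedmann equation, radiation neglected

# acceleration equation: the sign of (a_ddot / a) is the sign of OL - 0.5 * Om * a**-3
accel = OL - 0.5 * Om * a**-3

for ai, Hi, qi in zip(a, H, accel):
    state = "accelerating" if qi > 0 else "decelerating"
    print(f"a = {ai:4.2f}   H = {Hi:6.1f} km/s/Mpc   {state}")
# the sign flips near a = (Om / (2 * OL))**(1/3), about 0.6, i.e. several billion years ago
```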
http://math.stackexchange.com/questions/124999/how-do-i-find-lim-n-to-infty-fracn-1nn/125150
# How do I find $\lim_{n\to\infty}(\frac{n-1}{n})^n$

How would I find the limit for: $$\lim_{n\to\infty}\left(\frac{n-1}{n}\right)^n$$ I know it approaches $\frac{1}{e}$, but I have no idea how it works. Plus, why does: $$\lim_{n\to\infty}\left(\frac{n-x}{n}\right)^n=\frac{1}{e^x}$$

How has $e$ been defined to you? – Américo Tavares Mar 27 '12 at 9:57
e = $\lim_{n\to\infty}{(\frac{n+1}{n})^n}$ – user27251 Mar 27 '12 at 10:00
Thanks for the information. And is $n$ an integer? I assume it is. Would you mind confirming? – Américo Tavares Mar 27 '12 at 10:13
@user27251 Hint: $\frac{n-1}n = \frac 1{\frac n{n-1}}$ – martini Mar 27 '12 at 10:56

## 5 Answers

1. (See martini's comment.) For the first question write $\frac{n-1}{n}$ as $$\begin{equation*} \frac{n-1}{n}=\frac{1}{\frac{n}{n-1}}. \end{equation*}$$ Apply limits and use the definition of $\mathrm{e}$ you have been given $$\begin{equation*} \mathrm{e}=\lim_{n\rightarrow \infty }\left( \frac{n+1}{n}\right) ^{n}. \end{equation*}$$ We have $$\begin{eqnarray*} \lim_{n\rightarrow \infty }\left( \frac{n-1}{n}\right) ^{n} &=&\lim_{n\rightarrow \infty }\frac{1}{\left( \frac{n}{n-1}\right) ^{n}}= \frac{1}{\lim_{n\rightarrow \infty }\left( \frac{n}{n-1}\right) ^{n}} \\ &=&\frac{1}{\lim_{n\rightarrow \infty }\left( \frac{n+1}{n}\right) ^{n+1}}= \frac{1}{\lim_{n\rightarrow \infty }\left( \frac{n+1}{n}\right) ^{n}\cdot\lim_{n\rightarrow \infty }\frac{n+1}{n}} \\ &=&\frac{1}{\mathrm{e}\cdot 1}=\frac{1}{\mathrm{e}}. \end{eqnarray*}$$ 2. Hint for the second question: write $$\begin{equation*} \left( \frac{n-x}{n}\right) ^{n}=\left( \left( 1-\frac{1}{n/x}\right) ^{n/x}\right) ^{x}, \end{equation*}$$ use the substitution $m=n/x$ and apply limits.

One of the basic properties of $e$ is that $$\lim_{ n \to \infty} \left(1+\frac{1}{n} \right)^n=e$$ You can use this here to find your answer.

For the first, take the reciprocal and substitute $n\mapsto n+1$ to get $$\begin{align} \frac{1}{\lim\limits_{n\to\infty}\left(\frac{n-1}{n}\right)^n} &=\lim_{n\to\infty}\left(\frac{n}{n-1}\right)^n\\ &\stackrel{n\to n+1}{=}\lim_{n\to\infty}\left(\frac{n+1}{n}\right)^{n+1}\\ &=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n\cdot\lim_{n\to\infty}\left(1+\frac{1}{n}\right)\\ &=e\cdot1 \end{align}$$ Therefore, $\lim\limits_{n\to\infty}\left(\frac{n-1}{n}\right)^n=\dfrac1e$. For the second, take the reciprocal and substitute $n\mapsto nx+x$ to get $$\begin{align} \frac{1}{\lim\limits_{n\to\infty}\left(\frac{n-x}{n}\right)^n} &=\lim_{n\to\infty}\left(\frac{n}{n-x}\right)^n\\ &\stackrel{n\to nx+x}{=}\lim_{n\to\infty}\left(\frac{nx+x}{nx}\right)^{nx+x}\\ &=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}\cdot\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^x\\ &=e^x\cdot1 \end{align}$$ Therefore, $\lim\limits_{n\to\infty}\left(\frac{n-x}{n}\right)^n=\dfrac{1}{e^x}$. We could have started from the second and set $x=1$ for the first.

I do the first part and the second part is done similarly. Solution 1 We know that the derivative of $\ln x$ is $\frac{1}{x}$.
Hence using first principle differentiation, we have: $$\begin{align*} \frac{1}{x}&=\lim_{h\to0}\frac{\ln(x+h)-\ln x}{h}\\ &=\lim_{h\to0}\ln\left(1+\frac{h}{x}\right)^{\frac{1}{h}} \end{align*}$$ Let $n=\frac{1}{h}$ and $y=\frac{1}{x}$, so now $$y=\lim_{n\to\infty}\ln\left(1+\frac{y}{n}\right)^n$$ Now let $y=-1$, then $$\begin{align*} -1&=\lim_{n\to\infty}\ln\left(1-\frac{1}{n}\right)^n\\ \frac{1}{e}&=\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^n \end{align*}$$ Solution 2 Use L'Hopital's rule: $$\begin{align*} \lim_{n\to\infty}n\ln \left(1-\frac{1}{n}\right)&=-\lim_{n\to\infty}\frac{1}{1-\frac{1}{n}}\\ &=-1 \end{align*}$$ The rest is like solution 1. - ln is not additive. – hilbert Mar 27 '12 at 14:50 2 What does that have to do with my solution? – Vafa Khalighi Mar 27 '12 at 23:47 Here's a good strategy, first do the limit of the log of expression. Let $y = \left(1 - \frac{x}{n} \right)^n$ (which is the same as your expression). Then, take the natural log of both sides to get $$\ln y = \ln \left[\left(1 - \frac{x}{n} \right)^n \right] = n \cdot \ln \left(1 - \frac{x}{n} \right) = \frac{\ln \left(1 - \frac{x}{n} \right)}{\frac{1}{n}}$$ Now, as $n \to \infty$, the final fraction goes to $\frac{0}{0}$, an indeterminate form. This suggests that we try l'Hopital's rule. That is $$\begin{align*} \lim_{n \to \infty} \ln y &= \lim_{n \to \infty} \frac{\ln \left(1 - \frac{x}{n} \right)}{\frac{1}{n}} \\ &= \lim_{n \to \infty} \frac{\left(1 - \frac{x}{n}\right)^{-1} (x \cdot n^{-2})}{-n^{-2}} \\ &= \lim_{n \to \infty} -x\left(1 - \frac{x}{n} \right)^{-1} \\ &= -x \end{align*}$$ Therefore, since the exponential function $\exp(x) = e^x$ is continuous, we can move the limit in or outside of this function (by the definition of continuity) and thus find the limit of $y$ itself: $$\lim_{n \to \infty} y = \lim_{n \to \infty} \exp(\ln y) = \exp \left( \lim_{n \to \infty} \ln y \right) = \exp (-x) = e^{-x}$$ -
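A quick numerical sanity check of both limits (illustrative only):

```python
import math

for n in (10, 100, 10_000, 1_000_000):
    print(n, ((n - 1) / n) ** n, 1 / math.e)

x = 3.0
n = 1_000_000
print(((n - x) / n) ** n, math.exp(-x))
```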
http://nrich.maths.org/6160
# Real-life Equations

##### Stage: 5 Challenge Level:

This is a list of many of the most important equations in science. In each case, we have labelled the two variable quantities $x$ and $y$. The letters $a, b$ stand for constants in each case.

Constant motion: $a = \frac{x}{y}$
Constant acceleration: $x = uy + \frac{1}{2} ay^2$
Beer-Lambert law: $a=bxy$
Exponential decay: $x=a e^{by}$
Michaelis-Menten: $x = \frac{ay}{b+y}$
pH: $x = -\log_{10}(y)$

Can you identify the possible meanings of the variables $x$ and $y$ and the constants in each case?

Four graphs are shown above, where the two axes intersect at the origin $(0, 0)$. The red crosses show four measurements. Although we do not know the numerical values (because there are no scales on the graphs), we can see whether the values are positive or negative in each variable. For example, the first measurement is positive in $x$ and positive in $y$; the second measurement is positive in $y$ and negative in $x$. For processes evolving according to each of the equations above, which measurements are possible?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8806557655334473, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43214/how-is-a-stars-parent-galaxy-recognized?answertab=active
# How is a star's parent galaxy recognized?

A star is presumably visible/detected by its radiation. But that star may or may not belong to our own galaxy ... yet news reports speak of detecting a star/nova in a distant galaxy. How does one determine whether the star she/he views belongs to the Milky Way, or some other galaxy ... or is a galactic orphan? Is it merely a matter of the distance to that star?

-

## 3 Answers

Every ordinary star we are able to individually observe is a part of the Milky Way. Well, except for stars in a small number of very nearby galaxies, but even galaxies such as Andromeda look like a "continuum", so we're not observing the stars individually although we see that the galaxy isn't just a point.

Only if a star goes nova (a thermonuclear explosion on a white dwarf star) or supernova (a similar explosion but stronger) can it be observed outside the Milky Way. In all such cases we've experienced, one may always identify a galaxy at the same location that was known before the nova/supernova explosion. So the star going nova/supernova clearly belongs to that galaxy.

Please note that distant galaxies look like dots – pretty much visually indistinguishable from stars in the Milky Way. A star going nova has 50,000–100,000 times higher luminosity than the Sun; the number is even higher for a supernova. That's a sufficient increase of the luminosity for an exploding star in a distant galaxy to become "almost as bright" as the whole galaxy, well, not quite.

-

Just want to clarify: The absolute furthest galaxies may look like dots, but the galaxies we can see supernovae in are generally extended objects in even moderate telescopes. – Chris White Nov 1 '12 at 20:31

– Luboš Motl Nov 1 '12 at 20:54

Astrophysics is more than just cosmology, and indeed we study nearby SNe Ia just to figure out what they are, since no one knows for sure. Also most SNe are observed with $z < 1$ (including in fact all the SNe used by Riess and Perlmutter). Moreover, due to the turnover in the angular diameter distance as a function of redshift, galaxies will only ever appear so small. Though perhaps I have wandered a bit off topic from the OP. – Chris White Nov 1 '12 at 22:39

Chris, the counterintuitive behaviour of the angular diameter distance above z~1.5 is known to me, but I have a question: why doesn't that translate into an increase in apparent brightness? Surface flux is supposed to be conserved by means of Liouville's theorem, no matter how counterintuitive it may seem. – Eduardo Guerras Valera Nov 2 '12 at 3:14

– Eduardo Guerras Valera Nov 2 '12 at 12:39

The stars of our own galaxy are always much brighter than the stars of other galaxies. Just as a point of reference, the Milky Way is about 100 thousand light years across. The nearest large galaxy, the Andromeda galaxy, is 2.2 million light years away. All of its stars would therefore be about 20 times as far away as any star in our galaxy. Here's a picture of the Andromeda galaxy, for example:

The stars that belong to the Andromeda galaxy are mostly not even recognizable as stars at all. They make up the fuzz of the disk. There are also two satellite galaxies that appear fuzzy in the image. Most other points of light are stars from our own galaxy, with a few faint fuzzies in the very distant background that are other, far more distant galaxies. Every other galaxy (and there are billions) is many times farther away, with the exception of the Large and Small Magellanic Clouds, which are satellites of the Milky Way.
This, coincidentally, was the reason why up until Edwin Hubble in the early 1900s, no one was able to determine whether galaxies were nebulae or separate, distinct objects from our own galaxy. It wasn't until the construction of the 100 inch Hooker telescope and the later 200 inch Mount Palomar telescope that anyone was able to resolve any stars at all from these galaxies.

-

The key is that the intrinsic brightness of all supernovae (at least for the most important type Ia) is roughly the same: it peaks around magnitude -19. From the difference with the apparent magnitude (called the distance modulus) the distance to the supernova can be derived, and then compared to the distance to the suspected host galaxy (derived from its redshift). The problem is that for both distances, a model of the universe must be assumed. But that is another question.

For nearby supernovae and novae, the size of the expanding photosphere can be used as an alternate method for distance estimates too. This case is less usual.

With some quasars the same question arises, because there is a galaxy in the line of sight. In this case, the difference in redshifts, and thus in distance, leaves no doubt, although both objects appear at the same position in the sky.

-

Both galaxies and supernovae are redshifted. You just need to compare the redshift of the host galaxy to the supernova in question, and you can discern whether or not the supernova - or quasar - belongs to that galaxy or not. – Ernie Dec 13 '12 at 17:05

@Ernie: no, you hardly ever measure supernova redshifts. Distances to supernovae are mostly derived from the extrapolated maximum brightness. It is uncommon to have a supernova so bright and nearby as to make detailed spectroscopic measurements. Besides that, supernova lines are very broad, due to the high expansion velocity of the shell. And the more interesting ones for cosmology are the farthest ones, where integrated photometric measures are possible, but spectroscopy is harder to do. – Eduardo Guerras Valera Dec 13 '12 at 18:15
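To make the distance-modulus remark concrete: the standard relation $m - M = 5\log_{10}(d/10\,\mathrm{pc})$ turns an apparent magnitude and the assumed peak absolute magnitude of about $-19$ into a luminosity distance. The sketch below is only an illustration of that arithmetic (the formula and the example numbers are textbook values, not taken from the answers above), and it ignores extinction and cosmological corrections:

```cpp
#include <cmath>
#include <cstdio>

// Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc).
double distance_parsec(double apparent_mag, double absolute_mag) {
    return 10.0 * std::pow(10.0, (apparent_mag - absolute_mag) / 5.0);
}

int main() {
    // Hypothetical type Ia supernova observed at apparent magnitude 19,
    // with an assumed peak absolute magnitude of about -19.
    double d_pc = distance_parsec(19.0, -19.0);
    std::printf("distance = %.3g pc = %.3g Mpc\n", d_pc, d_pc / 1.0e6);
    return 0;
}
```

A supernova that peaks 38 magnitudes fainter than its absolute magnitude works out to roughly 400 Mpc, which can then be compared with the redshift distance of the suspected host galaxy.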
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9403051137924194, "perplexity_flag": "middle"}
http://scratchapixel.com/lessons/3d-basic-lessons/lesson-4-geometry/math-operations-on-points-and-vectors/
# Math Operations on Points and Vectors

Now that we have explained the concept of (cartesian) coordinate system (and how points' and vectors' coordinates relate to coordinate systems), we can look at some of the most common operations which can be performed on points and vectors. This should cover the most common functions you will find in any 3D application and renderer.

## Vector Class in C++

First let's define what our C++ Vector class will look like:

```cpp
template<typename T>
class Vec3
{
public:
    Vec3() : x(T(0)), y(T(0)), z(T(0)) {}
    Vec3(T xx) : x(xx), y(xx), z(xx) {}
    Vec3(T xx, T yy, T zz) : x(xx), y(yy), z(zz) {}
    T x, y, z;
};
```

## Vector Length

As we mentioned in the previous paragraph, a vector can be seen as an arrow starting from one point and finishing at another. The vector itself indicates not only the direction of point B from A but can also be used to find out the distance that separates point B from point A. This is given by the length of the vector, which can easily be computed with the following formula:

$$||V|| = \sqrt{V.x * V.x + V.y * V.y + V.z * V.z}$$

In mathematics, the double bar ($||V||$) notation indicates the length of a vector. The vector's length is sometimes also called norm or magnitude (figure 1).

```cpp
T magnitude() const
{
    return sqrt(x * x + y * y + z * z);
}
```

Note that the axes of the three-dimensional cartesian coordinate system are unit vectors.

## Normalizing a Vector

A normalized vector (we will use normalize with a z here, which is the standard in the industry) is a vector whose length is 1 (vector B in figure 1). Such a vector is also called a unit vector (it is a vector which has unit length). Normalizing a vector is very simple. We first compute the length of the vector and divide each one of the vector's coordinates by this length. The mathematical notation is:

$$\hat{V} = {V \over { || V || }}$$

Figure 1: the magnitude or length of vectors A and B is denoted by the double bar notation. A normalized vector is a vector whose length is 1 (in this example vector B).

Note that the C++ implementation can be optimised. First, we only normalize the vector if its length is greater than 0 (as dividing by 0 is forbidden). We then compute a temporary variable which is the inverse of the vector's length, and multiply each coordinate of the vector by this value rather than dividing them by the vector's length. As you may know, multiplications in a program are less costly than divisions. This optimisation can be important, as normalizing a vector is an extremely common operation in a renderer, applied to thousands, hundreds of thousands, millions of vectors (if not more). At this level, any possible optimisation will have an impact on the final render time. Note though that some compilers will manage that for you under the hood. But you can always make that optimisation explicit in your code.
```cpp
void normalize()
{
    T mag = magnitude();
    if (mag) *this *= 1 / mag;
}
```

In mathematics, you will also find the term norm to define a function that assigns a length or size (or distance) to a vector. The function we have just described is called the Euclidean norm.

## Dot Product

Figure 2: the dot product of two vectors can be seen as the projection of A over B. If the two vectors A and B have unit length then the result of the dot product is the cosine of the angle subtended by the two vectors (θ).

The dot product or scalar product requires two vectors A and B and can be seen as the projection of one vector onto the other. The result of the dot product is a real number (a float or double in programming). A dot product between two vectors is denoted with the dot sign: $A \cdot B$ (but it can also sometimes be written as $\langle a, b\rangle$). The dot product consists of multiplying each element of the A vector with its counterpart from vector B and taking the sum of each product. In the case of 3D vectors (the length of the vector is three; they have three coefficients or elements which are x, y and z), it consists of the following operation:

$$A \cdot B = A.x * B.x + A.y * B.y + A.z * B.z$$

Note that this is quite similar to the way we compute the length (distance this time) of a vector. If we take the square root ($\sqrt{A \cdot A}$) of the dot product between two vectors which are equal (A=B), then we obtain the length of the vector. We can write:

$$||V||^2=V \cdot V$$

It can be used to simplify the implementation of the C++ code sometimes:

```cpp
T dot(const Vec3<T> &v) const
{
    return x * v.x + y * v.y + z * v.z;
}

void normalize()
{
    T len2 = this->dot(*this);
    if (len2) {
        T invLength = T(1) / sqrt(len2);
        *this *= invLength;
    }
}
```

The dot product between two vectors is an extremely important and common operation in any 3D application because the result of this operation relates to the cosine of the angle between the two vectors. Figure 2 illustrates the geometric interpretation of the dot product. In this example vector A is projected in the direction of vector B.

• if B is a unit vector then the product $A \cdot B$ gives $||A||\cos(\theta)$, the magnitude of the projection of A in the direction of B, with a minus sign if the direction is opposite. This is called the scalar projection of A onto B.

• when neither A nor B is a unit vector, we can instead write $A \cdot { B \over ||B|| }$, since the unit vector in the direction of B is $B \over ||B||$.

• when the two vectors are normalised, taking the arccosine of the dot product gives you the angle $\theta$ between the two vectors: $\theta = acos({{A \cdot B} \over {||A||\:||B||}})$ or $\theta=acos(\hat A \cdot \hat B)$.

The dot product is a very important operation in 3D. It can be used for many things, for instance as a test of orthogonality. When two vectors are perpendicular to each other (A.B), the result of the dot product between these two vectors is 0. For unit vectors, when the two vectors are pointing in opposite directions (A.C), the dot product returns -1, and when they are pointing in the exact same direction (A.D), it returns 1. It is also used intensively to find out the angle between two vectors, or to compute the angle between a vector and the axis of a coordinate system (which is useful when the coordinates of a vector are converted to spherical coordinates; this is explained in the chapter on trigonometric functions).
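As a small usage sketch (my own addition, assuming the Vec3 class with the magnitude() and dot() methods shown in this lesson is in scope), the dot product can serve both as an orthogonality test and to recover the angle between two vectors:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    Vec3<float> a(1, 0, 0), b(0, 1, 0), c(1, 1, 0);
    // Orthogonality test: the dot product of two perpendicular vectors is 0.
    std::printf("a.b = %f\n", a.dot(b));                          // prints 0
    // Angle between two vectors: divide the dot product by the two lengths,
    // then take the arccosine.
    float cosTheta = a.dot(c) / (a.magnitude() * c.magnitude());
    std::printf("angle(a, c) = %f rad\n", std::acos(cosTheta));   // ~0.7854 (45 degrees)
    return 0;
}
```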
## Cross Product

The cross product is also an operation on two vectors, but unlike the dot product, which returns a number, the cross product returns a vector. The particularity of this operation is that the vector resulting from the cross product is perpendicular to the other two (this is shown in figure 3). The cross product operation is written using the following syntax:

$$C = A \times B$$

Figure 3: the cross product of two vectors A and B gives a vector C perpendicular to the plane defined by A and B. When A and B are orthogonal to each other (and have unit length), A, B, C form a Cartesian coordinate system.

To compute the cross product we will need to implement the following formula:

$$\begin{array}{l} C_X = A_Y * B_Z - A_Z * B_Y \\ C_Y = A_Z * B_X - A_X * B_Z \\ C_Z = A_X * B_Y - A_Y * B_X \\ \end{array}$$

The result of the cross product is another vector which is orthogonal to the other two. A cross product between two vectors is denoted with the cross sign: $A \times B$. The two vectors A and B define a plane and the resulting vector C is perpendicular to that plane. Vectors A and B don't have to be perpendicular to each other, but when they are, the resulting A, B and C vectors form a cartesian coordinate system (assuming the vectors have unit length). This is particularly useful to create coordinate systems, which we will explain in the chapter Creating a Local Coordinate System.

```cpp
Vec3 cross(const Vec3<T> &v) const
{
    return Vec3(
        y * v.z - z * v.y,
        z * v.x - x * v.z,
        x * v.y - y * v.x);
}
```

If you need a mnemonic way of remembering this formula, we like the technique that consists of asking ourselves the question "why z?", y and z being the coordinates of vectors A and B which are used to compute the x coordinate of the resulting vector C. More seriously, logic can easily be used to reconstruct this formula. Since you know that the result of the cross product is a vector perpendicular to the other two, you know that if A and B are the x- and y-axes of a cartesian coordinate system, the cross product of A and B should give you the z-axis, that is (0,0,1). The only way you can get this result is if Cz = 1, which is only true when Cz = A.x * B.y - A.y * B.x. From there you can deduce the formulas which are used to compute Cx and Cy. Finally, the easiest method might simply be to write the cross product operation in the following form:

$$\begin{pmatrix}a_x \\ a_y \\ a_z\end{pmatrix} \times \begin{pmatrix}b_x \\ b_y \\ b_z\end{pmatrix} = \begin{pmatrix}a_yb_z - a_zb_y \\ a_zb_x - a_xb_z \\ a_xb_y - a_yb_x\end{pmatrix}$$

Presenting the vectors in column form shows that to compute a given coordinate of C (for example x) we need to use the other two coordinates (y and z) of vectors A and B.

It is very important to note that the order of the vectors involved in the cross product has an effect on the resulting vector C. If we take the previous example (taking the cross product between the x- and the y-axis of a cartesian coordinate system), note that A x B doesn't give you the same result as B x A:

AxB = (1,0,0)x(0,1,0) = (0,0,1)

BxA = (0,1,0)x(1,0,0) = (0,0,-1)

Figure 4: using your left or right hand to determine the orientation of vector C (the normal for instance) when the index finger points along A and the middle finger points along B.

We say that the cross product is anticommutative (swapping the position of any two arguments negates the result): if AxB=C then BxA=-C.
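Here is a brief usage sketch (my own addition, again assuming the Vec3 class with the dot() and cross() methods above) that checks the x-axis/y-axis example and the anticommutativity of the cross product:

```cpp
#include <cstdio>

int main() {
    Vec3<float> a(1, 0, 0), b(0, 1, 0);
    Vec3<float> c1 = a.cross(b);   // (0, 0, 1)
    Vec3<float> c2 = b.cross(a);   // (0, 0, -1): the cross product is anticommutative
    std::printf("a x b = (%g, %g, %g)\n", c1.x, c1.y, c1.z);
    std::printf("b x a = (%g, %g, %g)\n", c2.x, c2.y, c2.z);
    // The result is perpendicular to both inputs: both dot products are 0.
    std::printf("c1.a = %g, c1.b = %g\n", c1.dot(a), c1.dot(b));
    return 0;
}
```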
Remember from the previous chapter that when two vectors are used to define the first two basis vectors of a coordinate system, the third vector can point to either side of the plane. We also described a technique in which you use your hands to differentiate the two systems. When you compute a cross product between vectors you will always get the same unique solution. For instance if A = (1, 0, 0) and B = (0, 1, 0), C can only be (0, 0, 1). So you might ask why you should care about the handedness of your coordinate system then? Because even though the result of the computation is always the same, the way you will draw the resulting vector depends on the handedness of your coordinate system. You can use the same mnemonic technique to find out in which direction the vector should point depending on the convention you are using. In the case of a right-hand coordinate system, if you align the index finger along the A vector (for example the tangent at a point on the surface) and the middle finger along the B vector (the bitangent, if you try to figure out the orientation of a normal), the thumb will point in the direction of the C vector (the normal). Note that if you use the same technique but with the left hand on the same vectors A and B, your thumb will point in the opposite direction. Remember though, that this is only a representation issue.

Figure 5: using your right hand, you can align your index finger along either A or B and the middle finger against the other vector (B or A) to find out if C (the normal for instance) points upwards or inwards in the right-hand coordinate system.

In mathematics, the result of a cross product is called a pseudovector. The order of the vectors in the cross product operation is important when surface normals are computed from the tangent and bitangent at the point where the normal is computed. Depending on this order, the resulting normal can either be pointing towards the interior of the surface (inward-pointing normal) or away from it (outward-pointing normal). You can find more information on this topic in the chapter Creating an Orientation Matrix.

## Vector/Point Addition and Subtraction

Other mathematical operations on points and vectors are usually straightforward. Multiplying a vector by a scalar or by another vector gives another vector, and we can add two vectors to each other, subtract them, divide them, etc. Note that some 3D APIs make the distinction between points, normals and vectors. Technically there are subtle differences between each of them which can justify creating three separate C++ classes. For example: normals are not transformed like points and vectors (we will learn about that in this lesson), subtracting two points technically gives a vector, adding a vector to a point gives a point, etc. However, in practice, we found that writing three distinct C++ classes to represent each type is not worth some of the complexity that comes with it. Similarly to OpenEXR, which has become an industry standard, we chose to represent all types with a single templated class called Vec3. We therefore make no distinction between normals, vectors and points (from a coding point of view). We will just need to manage the (rare) exceptions where variables representing different types (normals, vectors, points), though declared under the generic type Vec3, should be processed differently.
Here is some C++ code to represent the most common operations (see the Download section of this lesson for the full code):

```cpp
Vec3 operator + (const Vec3<T> &v) const
{ return Vec3<T>(x + v.x, y + v.y, z + v.z); }

Vec3 operator * (const T &val) const
{ return Vec3<T>(x * val, y * val, z * val); }

Vec3 operator / (const T &val) const
{ T invVal = T(1) / val; return Vec3<T>(x * invVal, y * invVal, z * invVal); }

Vec3 operator / (const Vec3<T> &v) const
{ return Vec3<T>(x / v.x, y / v.y, z / v.z); }

Vec3 operator * (const Vec3<T> &v) const
{ return Vec3<T>(x * v.x, y * v.y, z * v.z); }

Vec3 operator - (const Vec3<T> &v) const
{ return Vec3<T>(x - v.x, y - v.y, z - v.z); }

Vec3 operator - () const
{ return Vec3<T>(-x, -y, -z); }
```
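As a short usage sketch (my own addition, assuming the Vec3 class and the operators listed above are in scope), here is how these overloads are typically combined:

```cpp
#include <cstdio>

int main() {
    Vec3<float> a(1, 2, 3), b(4, 5, 6);
    Vec3<float> sum  = a + b;        // (5, 7, 9)
    Vec3<float> diff = b - a;        // (3, 3, 3)
    Vec3<float> half = b / 2.0f;     // (2, 2.5, 3)
    Vec3<float> neg  = -a;           // (-1, -2, -3)
    std::printf("sum  = (%g, %g, %g)\n", sum.x, sum.y, sum.z);
    std::printf("diff = (%g, %g, %g)\n", diff.x, diff.y, diff.z);
    std::printf("half = (%g, %g, %g)\n", half.x, half.y, half.z);
    std::printf("neg  = (%g, %g, %g)\n", neg.x, neg.y, neg.z);
    return 0;
}
```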
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 17, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8945413827896118, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/281155/let-t-s-mathcal-p-rightarrow-mathcal-p-be-such-that-t-circ-s-is-identit?answertab=oldest
# Let $T,S :\mathcal P \rightarrow \mathcal P$ be such that $T \circ S$ is identity

I came across the above problem and was trying to solve it. Could someone point me in the right direction? Thanks everyone in advance for your time.

-

I think it's a red herring that the elements of the vector space are polynomials. I think all you really need to use is that the vector space is infinite-dimensional (and it may help that you know an explicit basis). – Gerry Myerson Jan 18 at 2:24

What do you mean, "solve"? This was a multiple choice question, with possible answers (A) through (D). One of them is true and the other three are false; do you know which is which? – Lubin Jan 18 at 4:02

@user336440 you are free to upvote any/all answers that are helpful, even if you've accepted one of them: you can accept and upvote. To upvote an answer, you click on the "greyed" out "upwards" arrow above the answer's vote-count (to the left of the answer). ;-) – amWhy Jan 21 at 17:18

## 4 Answers

Hint: Use what you know about a vector space (and the axioms it satisfies) and what this means given that $P$ is a vector space.

Think of the linear maps $T, S$ as linear operators on $p \in P$: each mapping $P \to P$, and whose composition is the identity map $(T\circ S)(p) = T(S(p)) = p\in P$. What do you know about two maps, when composed, being an identity map? Does anything change if you take $S(T(p)) = (S \circ T)p\;$?

Caveat: Be careful to distinguish the case of finite-dimensional vector spaces from infinite-dimensional vector spaces. What is true in finite-dimensional spaces does not necessarily hold in infinite-dimensional vector spaces, as is the case here.

-

I see that $(T\circ S)(p) = T(S(p)) = T(p')=p\in P$. But $(S\circ T)(p) = S(T(p)) =S(p')= ??\in P$. I am confused here. – user33640 Jan 18 at 3:30

Thanks a lot sir for the detailed clarification. I have got it. – user33640 Jan 18 at 3:45

You're very welcome. – amWhy Jan 18 at 3:46

@amWhy, I don't know what you mean by "the" inverse of a linear map. All we know is that $T$ is one-to-one and $S$ is onto, so that $T$ has a left inverse, and $S$ has a right inverse. – Lubin Jan 18 at 4:06

– amWhy Jan 18 at 4:21

Hint: Try to think of operators $T,S$ satisfying the hypotheses. Think calculus.

-

With apologies to @amWhy and appreciation to @Gerry and @Christopher, I would like to offer an expanded answer to the question. The whole point of the problem posed was to make the reader aware of the distinction between finite-dimensional and infinite-dimensional vector spaces. For self-maps, that is endomorphisms, of finite-dimensional spaces, a linear map is onto if and only if it's one-to-one. This comes by dimension-counting, and one consequence is that if $S\circ T$ is identity, then so is $T\circ S$. For matrices, this means that if a square matrix $A$ has a left inverse $B$, then $B$ is also a right inverse of $A$.

The situation is quite different for infinite-dimensional spaces, and here just because $S\circ T$ is identity, there's no justification in saying that $T\circ S$ also is identity. @Christopher's hinted example of integration as your $T$, which sends $x^n$ to $x^{n+1}/(n+1)$, and differentiation as your $S$, which sends $x^n$ to $nx^{n-1}$, is apposite. Note that integration is one-to-one (two different functions have different antiderivatives), and differentiation is onto (every polynomial is the derivative of some polynomial).
And in this case, $T\circ S$ sends any polynomial $f(x)$ to itself as long as $f$ has no constant term, but sends all constants to zero. So it certainly is neither one-to-one nor onto. That's why (A) was the correct answer, even though (B) would have been the correct answer for a finite-dimensional space.

I'd like to point out that the one-to-one map $T$ has left inverses, but not a unique one in this case: since its image ("range" in the terminology of many) is not the whole of the space, there is considerable freedom in choosing a left inverse $S$ for it. For instance, we could have proclaimed that $S(f)=f'$ for polynomials without constant term, but $S(c)=c(17-3x^2)$ for constants $c$.

-

Let $S:P\to P$ be such that $(Sp)(x)=\int_0^xp(t)\,dt$ and $T:P\to P$ be such that $(Tp)(x)=\frac{d}{dx}p(x).$ It's a matter of verification that both $S$ and $T$ are linear and that $TS$ is the identity transformation, while neither $S$ nor $T$ is the identity. So (C) is false. Note that for $p(x)=x^2+5x+2$, $(ST)p=S(2x+5)=\int_0^x(2t+5)\,dt=x^2+5x\neq p(x)$, so $ST$ is not the identity, whence (A) is true and (B) is false. Again, $T(x^2+5x+2)=2x+5$ is not a scalar multiple of $x^2+5x+2$, i.e. there is no $\alpha\in\mathbb R$ with $Tp=\alpha p$, so (D) is false.

-
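To see the asymmetry concretely, one can model a polynomial by its list of coefficients and implement the differentiation map $T$ and the integration map $S$ (integrating from 0) in a few lines. The sketch below is my own illustration rather than part of the answers; it checks that $T(S(p)) = p$ while $S(T(p))$ loses the constant term:

```cpp
#include <cstdio>
#include <vector>

// A polynomial is stored as its coefficients: p[i] is the coefficient of x^i.
using Poly = std::vector<double>;

// S: integration from 0, sending x^n to x^(n+1)/(n+1).
Poly S(const Poly &p) {
    Poly q(p.size() + 1, 0.0);
    for (size_t i = 0; i < p.size(); ++i) q[i + 1] = p[i] / (i + 1);
    return q;
}

// T: differentiation, sending x^n to n x^(n-1).
Poly T(const Poly &p) {
    if (p.size() <= 1) return Poly{0.0};
    Poly q(p.size() - 1, 0.0);
    for (size_t i = 1; i < p.size(); ++i) q[i - 1] = i * p[i];
    return q;
}

void print(const char *name, const Poly &p) {
    std::printf("%s:", name);
    for (double c : p) std::printf(" %g", c);
    std::printf("\n");
}

int main() {
    Poly p{2.0, 5.0, 1.0};          // p(x) = x^2 + 5x + 2
    print("T(S(p))", T(S(p)));      // 2 5 1  -> the identity on p
    print("S(T(p))", S(T(p)));      // 0 5 1  -> the constant term is lost
    return 0;
}
```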
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553176760673523, "perplexity_flag": "head"}
http://planetmath.org/LocallyRingedSpace
# 1 Definitions

A locally ringed space is a topological space $X$ together with a sheaf of rings $\mathcal{O}_{X}$ with the property that, for every point $p\in X$, the stalk $(\mathcal{O}_{X})_{p}$ is a local ring. (All rings mentioned in this article are required to be commutative.) A morphism of locally ringed spaces from $(X,\mathcal{O}_{X})$ to $(Y,\mathcal{O}_{Y})$ is a continuous map $f:X\longrightarrow Y$ together with a morphism of sheaves $\phi:\mathcal{O}_{Y}\longrightarrow\mathcal{O}_{X}$ with respect to $f$ such that, for every point $p\in X$, the induced ring homomorphism on stalks $\phi_{p}:(\mathcal{O}_{Y})_{{f(p)}}\longrightarrow(\mathcal{O}_{X})_{p}$ is a local homomorphism. That is, $\phi_{p}(y)\in\mathfrak{m}_{p}\text{ for every }y\in\mathfrak{m}_{{f(p)}},$ where $\mathfrak{m}_{p}$ (respectively, $\mathfrak{m}_{{f(p)}}$) is the maximal ideal of the ring $(\mathcal{O}_{X})_{p}$ (respectively, $(\mathcal{O}_{Y})_{{f(p)}}$).

# 2 Applications

Locally ringed spaces are encountered in many natural contexts. Basically, every sheaf on the topological space $X$ consisting of continuous functions with values in some field is a locally ringed space. Indeed, any such function which is not zero at a point $p\in X$ is nonzero and thus invertible in some neighborhood of $p$, which implies that the only maximal ideal of the stalk at $p$ is the set of germs of functions which vanish at $p$. The utility of this definition lies in the fact that one can then form constructions in familiar instances of locally ringed spaces which readily generalize in ways that would not necessarily be obvious without this framework. For example, given a manifold $X$ and its locally ringed space $\mathcal{D}_{X}$ of real–valued differentiable functions, one can show that the space of all tangent vectors to $X$ at $p$ is naturally isomorphic to the real vector space $(\mathfrak{m}_{p}/\mathfrak{m}_{p}^{2})^{*}$, where the ${}^{*}$ indicates the dual vector space. We then see that, in general, for any locally ringed space $X$, the space of tangent vectors at $p$ should be defined as the $k$–vector space $(\mathfrak{m}_{p}/\mathfrak{m}_{p}^{2})^{*}$, where $k$ is the residue field $(\mathcal{O}_{X})_{p}/\mathfrak{m}_{p}$ and ${}^{*}$ denotes dual with respect to $k$ as before. It turns out that this definition is the correct definition even in esoteric contexts like algebraic geometry over finite fields which at first sight lack the differential structure needed for constructions such as tangent vector.

Another useful application of locally ringed spaces is in the construction of schemes. The forgetful functor assigning to each locally ringed space $(X,\mathcal{O}_{X})$ the ring $\mathcal{O}_{X}(X)$ is adjoint to the "prime spectrum" functor taking each ring $R$ to the locally ringed space $\operatorname{Spec}(R)$, and this correspondence is essentially why the category of locally ringed spaces is the proper building block to use in the formulation of the notion of scheme.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 39, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9051201939582825, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/53212/special-relativity-acceleration-and-being-in-two-places-at-once-how-to-resolve
# Special relativity, acceleration and being in two places at once (How to resolve this paradox?):

Imagine I am on Earth and I have a clock which measures time $t$. When my Earth clock reads time $t$, then when I look at a moving spacecraft's clock I see the time $t'$. Let us assume time is measured in years. Let us suppose the clocks were initially synchronized so that when $t=0$, $t'= 0$. Now let us suppose that the speed of the rocket continues to increase in the following way (note that I have increased the speed in discrete time steps for convenience, but I'm sure a continuous speed function could be obtained; it would just be mathematically tedious):

At $t = 0$, $v = c\sqrt{3/4}$. So when $t = 1$, $\delta t'= \frac{\delta t}{\gamma} = 0.5.$

At $t = 1$, $v = c\sqrt{15/16}$. So when $t = 2$, $\delta t'= \frac{\delta t}{\gamma} = 0.25.$

At $t = 2$, $v = c\sqrt{63/64}$. So when $t = 3$, $\delta t'= \frac{\delta t}{\gamma} = 0.125.$

and so on...

If $v$ continues to increase over time in this way, then $t'$ will approach 1, but will never reach 1. It appears, from the perspective of the Earthling, that the person in the rocket never ages by more than 1 year. The person in the rocket, however, will experience time as normal. Now let us assume that after the person in the rocket has aged 10 years, he leaves his rocket ship and travels back to Earth, leaving his rocket to continue accelerating in the same pattern. Now when the man reaches Earth, will he not see himself in the rocket, where he sees himself not having aged 1 year? (Ignore optical effects for the purpose of this question.)

-

I don't think using special relativity in this regime is valid, since the person on the ship is accelerating. SR is only valid for inertial (non-accelerating) frames. You will probably have to analyze this problem using the general relativistic formulae for time dilation. – Kitchi Feb 6 at 15:36

– Chris Feb 6 at 15:37

"Now when the man reaches Earth, will he not see himself in the rocket," He can't get back to Earth faster than (or even as fast as) light, so no, he won't. By the time he returns the light arriving from the craft will be from after he left. This consideration effectively shows that your discrete approximation to the integral is incorrect. – dmckee♦ Feb 6 at 15:40

That's where the paradox lies dmckee, because the spacecraft never appears to reach the time he left from the perspective of the Earth. The spacecraft is only ever seen to reach 1 year, whereas he left at 10 years. So if you calculate that he returns to Earth at t = 1000000 then you will still only be able to calculate t' as <1. – Chris Feb 6 at 15:41

@dmckee, it shows that the approximation is incorrect OR there is a different resolution to the paradox OR it is a genuine paradox. I will attempt to find a continuous function v(t) so that the above point is still valid. This seems intuitively possible to me, since I can make the time steps arbitrarily small, and choose velocities to give the gamma factors which cause t' to asymptote at 1. – Chris Feb 6 at 15:50

## 1 Answer

Your example can't be usefully described because when you change speed you are changing your inertial frame, but you are not specifying how you change frames, i.e. what acceleration you use. Special relativity is perfectly capable of describing accelerated motion.
See for example John Baez's article on the relativistic rocket, which gives for constant acceleration:

$$t_{earth} = \frac{c}{a} \sinh\left(\frac{a \space t_{rocket}}{c}\right)$$

where $a$ is the acceleration of the rocket, measured at any given instant in a non-accelerating frame of reference travelling at the same instantaneous speed as the rocket. Since $\sinh(x)$ remains finite for all finite $x$, constant acceleration will never give the effect that you want, i.e. for $t_{earth}$ to become infinite at some finite value of $t_{rocket}$. The only way you can achieve this is for the acceleration of the rocket to become infinite, and obviously this isn't physically possible.

However there is an analogy that is close to your question. If you throw someone into a black hole then they will measure a finite time to reach the event horizon, while you will never see them reach the horizon, i.e. you would have to measure an infinite time before they reach the horizon. The link to your rocket is that the acceleration a shell observer measures for the falling object tends to infinity as the shell observer approaches the event horizon. You can think of this infinite acceleration giving an infinite time dilation, just as infinite acceleration would for an accelerating rocket.

But back to the point of your question: although the infalling observer measures a finite time to reach the event horizon, once they've done so an infinite time has passed for the Schwarzschild observers watching them fall. So there is no outside universe to return to. Not that you can return of course, because once you've reached the event horizon all timelike paths lead to the singularity and your inevitable doom.

-

– Chris Feb 6 at 16:26

@Chris: well yes, but the suggested function has the acceleration going to infinity as $x$ goes to one, and the infinite acceleration causes the infinite time dilation. For the Schwarzschild metric I don't think the acceleration measured by the shell observer has a tanh dependence on $r$ - when time permits I'll have a rummage through my GR textbooks. – John Rennie Feb 6 at 16:30
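To get a feeling for the numbers, the quoted formula can be evaluated directly. The sketch below is my own illustration (a constant proper acceleration of roughly 1 g is assumed); it shows that the Earth time grows exponentially with the rocket's proper time but never diverges at a finite proper time:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // t_earth = (c/a) * sinh(a * t_rocket / c), for constant proper acceleration a.
    const double c = 2.998e8;     // speed of light, m/s
    const double a = 9.81;        // proper acceleration, m/s^2, roughly 1 g
    const double year = 3.156e7;  // seconds per year
    for (double t_rocket = 1.0; t_rocket <= 16.0; t_rocket *= 2.0) {
        double t_earth = (c / a) * std::sinh(a * t_rocket * year / c);
        std::printf("proper time %5.1f yr  ->  earth time %12.1f yr\n",
                    t_rocket, t_earth / year);
    }
    return 0;
}
```

One year of proper time at 1 g corresponds to only about 1.2 Earth years by this formula, while 16 years of proper time already corresponds to millions of Earth years; the divergence the questioner wants never happens at finite proper time.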
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9494871497154236, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/15231/newtons-second-law-of-motion-in-terms-of-momentum
# Newton's second law of motion in terms of momentum

I am reading a document and in answer to the question

State Newton's second law of motion

the candidate answers that

The force acting on an object equals the rate of change of momentum of the object.

While this is not a complete answer, the examiner picks up on the word equals and says that it should be proportional instead. Now I understand that $F=\frac{dp}{dt}$ where both $F$ and $p$ are vectors — what is "proportional" about it? I checked my Good Old Ohanian (2nd edition) and it says explicitly "The rate of change of momentum equals force" in section 5.5 The Momentum of a Particle.

What is this examiner talking about?

-

The examiner is stupid, as usual. They are equal. – Ron Maimon Sep 29 '11 at 17:07

This is not psychology.stackexchange but the examiner almost certainly was confusing momentum with velocity, and replacing force equals change of momentum with change in velocity in his head. Why must some person's incompetence be a question on this site? – Ron Maimon Sep 29 '11 at 17:43

– Gruffalo Sep 29 '11 at 18:06

404 link now dead. – Qmechanic♦ Apr 21 at 10:47

## 5 Answers

This actually comes down to a question about the interpretation of units, and it's kind of a tricky issue.

On one hand, you can view any equation in physics as nothing more than a mathematical relationship between some numbers. This was the view taken in the early days of quantitative physics,* back in the 17th and 18th centuries, when the concepts of force and momentum were just being quantified. Since there was very little in the way of collaboration, the idea of a standardized unit system hadn't really taken off, so if you were doing an experiment to establish the relationship between force, momentum, and time, the units you used would have been determined by your equipment. In other words, all you would know is that you have a force meter (scale) which will give you a number proportional to the force, a "momentum meter" which will give you a number proportional to the momentum change, and a clock of some sort (perhaps a pendulum) which will give you a number proportional to the time.

Let's say it's been established that the relationship is linear. You would probably run an experiment in which you apply a certain quantity of force for a given number of ticks of the clock, and read the change in momentum off your measuring device. Then you would plug those numbers - it's important to notice that you're only dealing with numbers, since there are really no meaningful units to speak of - into the discrete approximation of Newton's second law:

$$F^{(N)} = K_F^{(N)}\frac{\Delta p^{(N)}}{\Delta t^{(N)}}$$

Here I've used the superscript $^{(N)}$ to indicate pure numerical value, i.e. the number you read off the scale/clock/meter. Plugging in these numbers will allow you to determine the value of the constant $K_F^{(N)}$. Hopefully it's obvious that the value of this constant will depend on how your equipment is calibrated, or in other words, which unit system you're using. If you, the hypothetical experimenter, decided to label your unit of force the pound $\mathrm{lb_F}$, your unit of momentum $\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-1}$, and your unit of time the second, you would find that

$$K_F^{(N)} = \frac{1}{32.174} \approx 0.0311$$

Back in those days, before people started really thinking about units, all multiplicative equations in physics were thought of as simple relationships between numbers. Accordingly, they included constants of proportionality which were customized to each lab's equipment.
Eventually, as more people started doing physics, there arose a need for standardized unit system so you could compare data from different labs. Then you could write the above equation as $$\frac{F}{\mathrm{lb_F}} = \frac{K_F}{\mathrm{lb_F}/(\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-2})}\frac{\bigl(\frac{\Delta p}{\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-1}}\bigr)}{\bigl(\frac{\Delta t}{\mathrm{s}}\bigr)}$$ because the numerical value of a measurement is just the measurement divided by its unit. You can algebraically rearrange this to $$F = K_F\frac{\Delta p}{\Delta t}$$ where $$K_F = K_F^{(N)}\frac{\mathrm{lb_F}}{\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-2}}$$ So Newton's second law has shifted from being a simple relationship between numbers, to being a relationship between physical quantities which are expressed as multiples of a reference value. However, you still have a conversion constant in the equation. Symbolically, it's independent of units, but you still do have to plug in a different number depending on which combination of units you want to work with. This is the sense in which $F$ is only proportional to $\frac{\mathrm{d}p}{\mathrm{d}t}$. In the modern scientific community, on the other hand, I think most people would agree that units are a human invention, and that physical quantities should exist in some sense independently of the units we choose to use for them. Taking that view, there should be some "natural" way to express the equations of physics that doesn't incorporate any "unit system artifacts" like these proportionality constants. The way we do this is to define the units as abstract objects and develop a set of rules for manipulating them (kind of like unit vectors). We can then incorporate the conversion constants into those rules. For example, let's again consider the discrete approximation of Newton's second law, but this time without the conversion constant written into it: $$F = \frac{\Delta p}{\Delta t}$$ You can still use seconds for time and $\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-1}$ for momentum in this equation. When you read the numbers off your measuring equipment and plug them into the formula, you'll do it like this: $$F = \frac{\Delta p^{(N)}\ \mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-1}}{\Delta t^{(N)}\ \mathrm{s}} = \frac{\Delta p^{(N)}}{\Delta t^{(N)}}\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-2}$$ Suppose you want your answer in pounds of force. You would look up the multiplication rule for $\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-2} \to \mathrm{lb_F}$, which in this case can be found on Wikipedia: $$\mathrm{lb_F} = 32.174\ \mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-2}$$ (in general you might have to chain a few rules together to get the right conversion). So you wind up with $$F = \frac{1}{32.174}\frac{\Delta p^{(N)}}{\Delta t^{(N)}}\mathrm{lb_F}$$ It works out to the same thing as before, but this time the conversion constant $K_F$ is part of the unit system, not the equation. This means that if you're not plugging actual values into the equation, you don't have to think about units or proportionality constants at all. And if you look at it this way, $F$ is actually equal to $\frac{\mathrm{d}p}{\mathrm{d}t}$. So what's the verdict? Unfortunately, there's no unassailable answer to this question of whether Newton's second law is a proportionality or an equality. Depending on how you think about it, either answer could be valid. But I would say the "equality" answer, which corresponds to the modern view of units, is conceptually cleaner. 
It's accepted by all competent modern physicists, as far as I know (for mechanics, at least; electromagnetism is a whole different story), and it's certainly the interpretation we try (however unsuccessfully) to instill into introductory physics students' minds. I'd definitely agree that the examiner was being unreasonably picky (though to be fair, he did give credit for it).

*I don't have an explicit source, so I'm not entirely sure this is the way things were really developed; I'm basing my description on some fuzzy memories. That being said, the backstory does help clarify the various ways in which we treat units, so consider it historical fiction if you must.

-

Thank you David. I think now I understand the context of this question much much better. – Gruffalo Sep 30 '11 at 7:54

From Principia Mathematica - The Mathematical Principles of Natural Philosophy by Isaac Newton, translated by John Machin - volume 1

Definition II

The quantity of motion is the measure of the same, arising from the velocity and quantity of matter conjunctly. The motion of the whole is the sum of the motions of all the parts; and therefore in a body double in quantity, with equal velocity, the motion is double; with twice the velocity, it is quadruple.

So this is where he defines momentum as being proportional to the product of mass and velocity, but doesn't give his second law in the form of a similar definition. The Stanford Encyclopedia of Philosophy states

The modern F=ma form of Newton's second law nowhere occurs in any edition of the Principia even though he had seen his second law formulated in this way in print during the interval between the second and third editions in Jacob Hermann's Phoronomia of 1716. Instead, it has the following formulation in all three editions: A change in motion is proportional to the motive force impressed and takes place along the straight line in which that force is impressed. In the body of the Principia this law is applied both to discrete cases, in which an instantaneous impulse such as from impact is effecting the change in motion, and to continuously acting cases, such as the change in motion in the continuous deceleration of a body moving in a resisting medium. Newton thus appears to have intended his second law to be neutral between discrete forces (that is, what we now call impulses) and continuous forces. (His stating the law in terms of proportions rather than equality bypasses what seems to us an inconsistency of units in treating the law as neutral between these two.)

The important point is that however Newton expresses his second law, he doesn't use units but rather proportions, and this has been interpreted as the impressed force being proportional to the rate of change of momentum, rather than being equal to it. Nowadays we define force in terms of units so that one unit of force equals one unit of rate of change of momentum, which means the constant of proportionality equals one. Therefore we could argue that the examiner is correct for emphasising that Newton didn't give his second law as an equation with units, but neither did he explicitly state what it was in the form of a definition.

-

The point is that the constant of proportionality, if you choose your units stupidly, is independent of anything. In these circumstances, you should drop the constant. – Ron Maimon Sep 29 '11 at 17:19

Thank you for your answer John. I understand that we could say F ~ dp but I do not understand why we should. The clear example is Hooke's law with its F ~ x.
Obviously it does require a constant of proportionality, because otherwise it would fail dimensional analysis. In Newton's second law the constant of proportionality is the dimensionless number one. Saying "force is proportional to the rate of change of momentum" sounds like sophistry to me. Am I missing something really important here? – Gruffalo Sep 29 '11 at 17:36

Ok, even though I can't find it now, I am pretty sure Newton says: the impressed force is the change in the quantity of motion (momentum). – Ron Maimon Sep 29 '11 at 17:38

@ron yes you're right, although he doesn't give his second law in words in Principia Mathematica. – John McVirgo Sep 29 '11 at 21:32

@gruffalo, the examiner is just emphasising that Newton didn't give an equation with units. – John McVirgo Sep 29 '11 at 21:33

If you put $F=\frac{dp}{dt}$ you are saying the same as the candidate. $F=K\frac{dp}{dt}$ Actually there should be a constant of proportionality. It is like the case of the electrostatic force, $F=\frac{K q_1 q_2}{r^2}$, where $K$ is the constant. But depending on the units you choose, $K$ could be $1$ (gaussian system) or $9 \times 10^9$ (international system).

-

I am saying the same as the candidate, indeed. My question is why the examiner is not happy with this. You said there should be a constant of proportionality but you did not explain why. I do not think there should be one and this is why I posted my question. Although I am not familiar with the electrostatic force yet, I am not sure how reasoning by analogy would help me here. Nevertheless, thank you for trying to help. – Gruffalo Sep 29 '11 at 17:20

There should be a constant of proportionality, because the very word "proportional" means the left side quantity increases or decreases strictly in line with the right side quantity. The constant is the number by which the ratio of increment or decrement is decided. For example: if you eat 1 kg of butter, you will gain weight for sure. So a food technician might put it this way: "Human weight gain is directly proportional to the weight of the butter consumed". But it is not guaranteed that 1 kg of butter will lead to 1 kg of weight gain. So he can do further research on the human body and digestion capabilities and may find out that it could cause 0.3 kg of weight gain. Now he can put it in an equation as `(Human weight gain by consuming butter = 0.3 * 1 kg butter)`. Assuming that all humans have the same digestive capabilities, 0.3 becomes a constant. So for a statement with "proportional" there should be a constant if you're trying to write an equation. In cases where the constant is one, it doesn't matter. In the case of F ~ ma, I am not sure why no constant is mentioned anywhere. But I guess it is written as F = ma probably because by experiment it would have been found that F equals m*a, making the constant value 1. Also, another expression, W (weight) = mg (mass * acceleration due to gravity), also does not show any constant, probably making the constant value 1. --- apologies for any grammatical mistakes, as I am not a native English speaker.

-

There are two things in this context:

1. First, taking k = 1 is OK as long as you use F = ma to define the unit of F as 1 Newton. (Not all proportionality constants, such as the one in Coulomb's law, are taken to be unity.)

2. When done as above, k can be taken as unit-less. But if you opt to use another force equation, such as Hooke's law (F = C·x), to define the unit of F, then k in F = k·ma will have units.
-
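As a tiny illustration of the unit bookkeeping described in the first answer of this thread (my own sketch, not part of any answer above), the quoted conversion $1\ \mathrm{lb_F} = 32.174\ \mathrm{lb_M\,ft\,s^{-2}}$ can be folded into a helper function so the equation itself stays free of constants:

```cpp
#include <cstdio>

// 1 lb_F = 32.174 lb_M ft s^-2 (the conversion quoted in the answer above).
const double LBM_FT_S2_PER_LBF = 32.174;

// Force in lb_F from a momentum change in lb_M ft/s over a time in seconds.
double force_lbf(double delta_p_lbm_ft_s, double delta_t_s) {
    return (delta_p_lbm_ft_s / delta_t_s) / LBM_FT_S2_PER_LBF;
}

int main() {
    // A momentum change of 64.348 lb_M ft/s over 2 s corresponds to 1 lb_F.
    std::printf("F = %.3f lb_F\n", force_lbf(64.348, 2.0));
    return 0;
}
```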
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9669943451881409, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/electroweak
# Tagged Questions

The electroweak tag has no wiki summary.

- Higgs boson mass and electroweak energy scale: Is it a coincidence that the mass of the Higgs boson is exactly half the electroweak energy scale?
- What breaks the symmetry between the electromagnetic and weak nuclear force? I know the electromagnetic force is mediated by a photon and the weak nuclear force is mediated by two massive bosons. Are there any other insights into why the masses are so different?
- Do any good theories exist on why the weak interaction is so profoundly chiral? I find the profound asymmetry in the sensitivity of left and right chiral particles to be one of the most remarkable analytical observations captured in the Standard Model. Yet for some, I've not ...
- Weak isospin and types of weak charge: My understanding is that QCD has three color charges that are conserved as a result of global SU(3) invariance. What about SU(2) weak? Does it have two types of charges? What I'm getting at is: U(1) ...
- Is the search for a Simple-group-based Electro-Weak theory over? Just wondering: We know that, in its current form of the $SU(2)_L\times U(1)$, the electroweak theory rides a wave of huge success. However, is it not possible that the correct simple group ...
- CP-violation in weak and strong sectors: There is a possible CP-violating term in the strong sector of the standard model proportional to $\theta_\text{QCD}$. In the absence of this term, the strong interactions are CP-invariant. ...
- Hamiltonian of the charged current in SM (related to the Lorentz invariance): recently when I was studying the scatterings which involve a vector boson (like the W boson) as an intermediate particle, I saw that the propagator is not Lorentz invariant, I read that there is another ...
- Why does the electron energy distribution from muon decay peak near the kinematic maximum? I'm trying to understand why when you have a muon decay event, the energy of the electron peaks near the maximum kinematically allowed value. Is there an intuitive explanation for why this is the case ...
- Jarlskog Invariant and its mathematical origin: CP violation is present in the weak interactions if there are no degeneracies in the up-quark/down-quark matrices and the Jarlskog invariant $J=Im(V_{us} V_{cb} V_{ub}^* V_{cs}^*)$ is nonvanishing ...
- How to show the oblique parameters S, T, and U are coefficients of d=6 operators: In Morii, Lim, Mukherjee, The Physics of the Standard Model and Beyond. 2004, ch. 8, they claim that the Peskin–Takeuchi oblique parameters S, T and U are in fact Wilson coefficients of certain ...
- What is the rate of B violation expected in the standard model during high energy collisions? In a recent question Can colliders detect B violation? I asked about detecting B violation in collisions. Here I am interested in the theory aspect. (I asked both questions originally in the same ...
- Mechanism of annihilation: Can the annihilation of matter and antimatter be explained by the electro-weak interaction? Can pair-production be explained in the same way?
- Can the mass of longitudinal and transverse W bosons be measured separately? Some higgsless unified models of particle physics predict that the mass of longitudinally polarized W bosons and the mass of transversely polarized W bosons are different. In those models, a ...
- $WW\to t\bar{t}$ growth: I was told recently that "it is well known that processes like $WW\to t\bar{t}$ ($t$ being a top, or any massive fermion) grows linearly with the energy in the absence of an Higgs boson." Does anyone ...
- Why is there no theta-angle (topological term) for the weak interactions? Why is there no analog of $\Theta_\text{QCD}$ for the weak interaction? Is this topological term generated? If not, why not? Is this related to the fact that $SU(2)_L$ is broken?
- If LHC searches of a Higgs boson won't be a success, what consequences for the theory of electroweak interaction can it bear? Whether it is necessary to search still for variants of an explanation of spontaneously breaking gauge symmetry, giving masses to the W, Z bosons? Goldstone bosons are bosons that appear necessarily ...
- Might the LHC see nothing new at all? There's no guarantee that supersymmetry (or more exotic new physics) will be seen at the LHC. Meanwhile, it's standard lore that a Higgsless standard model becomes nonunitary somewhere in the vicinity ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150184392929077, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/17852/basic-free-fall-problem
# Basic Free Fall Problem [closed]

So I am trying to revisit my childhood by doing some physics problems, but it seems that I have forgotten almost everything. It's not a big deal... that's why I'm practising! My problem is pretty basic, although I remember just a little... let me state it:

A 5 kg block of ice falls away from the edge of the roof of a block of flats, at a height of 26 m above the ground. Ignoring air drag, find out: a) how long it takes for the ice block to hit the ground; b) the speed at which the ice block hits the ground; c) how much energy it transfers to the surroundings when it comes to a stop and breaks into pieces. (g = 9.8 m/s^2)

For a), I tried t = d/r, which gives me 2.653 secs. For b), I tried 9.8 * 2.653, which results in 25.9994 m/s. For c) I tried nothing, since I don't remember what I should do.

Can someone take a look at this problem and tell me whether I am doing it right, or just messing it all up? Thank you!

- 1 Hi Peter, and welcome to Physics Stack Exchange! I see that you got some useful answers, which is good, but I wanted to mention that this is not really a site for homework help; it's a site for general conceptual questions. In other words, you should not be asking "am I doing this right?" but rather something like "what does this formula mean?", just to give an example. I've closed this to indicate that it's not really the kind of question for this site. For future reference, remember that you can always edit closed questions to make them more appropriate and get them reopened. – David Zaslavsky♦ Dec 5 '11 at 22:10

## closed as too localized by David Zaslavsky♦ Dec 5 '11 at 22:05

This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ.

## 2 Answers

Unfortunately, you are messing up quite a bit. d/g gives 2.653 s${}^2$, not seconds. You should re-read about free fall in any basic physics book. There you'll see that when you have constant acceleration (in this case it's $g$), the height as a function of time is $$y(t) = y_0+v_0\,t+1/2\,g\,t^2$$ where $y_0$ and $v_0$ are your starting height and speed, and $y(t)$ is the height at instant $t$. You can solve for the final time knowing that the initial speed is zero and the distance the block falls is $H=y(t_f)-y_0$. The final expression is also in any book. The final speed can then be calculated from the equation for the speed: $$v(t) = v_0 + g\,t$$ For the energy question, I could give you the formula, but I would rather suggest that you first understand clearly the basic concepts of kinematics and dynamics before going to the next level of abstraction, which is mechanical energy.

- Thank you very much! :) – Peter Dec 5 '11 at 13:46

Welcome to physics.stackexchange, Peter! The results that you calculated are not correct. What you need are the basic equations of motion $$\begin{align} & v & = &u+at \\ & s & = &ut + \tfrac12 at^2 \\ \end{align}$$ with the initial velocity $u$ and the acceleration due to gravity $a=g$. For c) you need the kinetic energy of a moving body, or the potential energy of your block before the fall.

- Thanks to you as well! :) – Peter Dec 5 '11 at 14:01 Or you could think about potential energy. This is much simpler to calculate. – Georg Dec 5 '11 at 14:59
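For completeness, here is a quick numerical check of the kinematics discussed in the answers above, as a short Python sketch. It uses only the values stated in the problem; part (c) uses the kinetic-energy/potential-energy route suggested in the answers.

```python
import math

g = 9.8    # m/s^2, as given in the problem
h = 26.0   # m, height of the roof
m = 5.0    # kg, mass of the ice block

t = math.sqrt(2 * h / g)    # (a) from h = (1/2) g t^2 with zero initial speed
v = g * t                   # (b) impact speed, v = g t
E = 0.5 * m * v ** 2        # (c) kinetic energy at impact; equals m*g*h, the lost potential energy

print(f"t = {t:.2f} s, v = {v:.1f} m/s, E = {E:.0f} J")
# prints roughly: t = 2.30 s, v = 22.6 m/s, E = 1274 J
```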
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560133814811707, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Kaluza%E2%80%93Klein_theory
# Kaluza–Klein theory

In physics, Kaluza–Klein theory (KK theory) is a model that seeks to unify the two fundamental forces of gravitation and electromagnetism. The theory was first published in 1921. It was proposed by the mathematician Theodor Kaluza, who extended general relativity to a five-dimensional spacetime. The resulting equations can be separated into further sets of equations, one of which is equivalent to the Einstein field equations, another set equivalent to Maxwell's equations for the electromagnetic field, and the final part describing an extra scalar field now termed the "radion".

## Overview

The space M × C is compactified over the compact set C, and after Kaluza–Klein decomposition we have an effective field theory over M.

A splitting of five-dimensional spacetime into the Einstein equations and Maxwell equations in four dimensions was first discovered by Gunnar Nordström in 1914, in the context of his theory of gravity, but subsequently forgotten. Kaluza published his derivation in 1921 as an attempt to unify electromagnetism with Einstein's general relativity. In 1926, Oskar Klein proposed that the fourth spatial dimension is curled up in a circle of a very small radius, so that a particle moving a short distance along that axis would return to where it began. The distance a particle can travel before reaching its initial position is said to be the size of the dimension. This extra dimension is a compact set, and the phenomenon of having a space-time with compact dimensions is referred to as compactification.

In modern geometry, the extra fifth dimension can be understood to be the circle group U(1), as electromagnetism can essentially be formulated as a gauge theory on a fiber bundle, the circle bundle, with gauge group U(1). In Kaluza–Klein theory this group suggests that gauge symmetry is the symmetry of circular compact dimensions. Once this geometrical interpretation is understood, it is relatively straightforward to replace U(1) by a general Lie group. Such generalizations are often called Yang–Mills theories. If a distinction is drawn, then it is that Yang–Mills theories occur on a flat space-time, whereas Kaluza–Klein treats the more general case of curved spacetime. The base space of Kaluza–Klein theory need not be four-dimensional space-time; it can be any (pseudo-)Riemannian manifold, or even a supersymmetric manifold or orbifold, or even a noncommutative space.

As an approach to the unification of the forces, it is straightforward to apply the Kaluza–Klein theory in an attempt to unify gravity with the strong and electroweak forces by using the symmetry group of the Standard Model, SU(3) × SU(2) × U(1). However, an attempt to convert this interesting geometrical construction into a bona-fide model of reality founders on a number of issues, including the fact that the fermions must be introduced in an artificial way (in nonsupersymmetric models). Nonetheless, KK remains an important touchstone in theoretical physics and is often embedded in more sophisticated theories. It is studied in its own right as an object of geometric interest in K-theory.
Even in the absence of a completely satisfying theoretical physics framework, the idea of exploring extra, compactified dimensions is of considerable interest in the experimental physics and astrophysics communities. A variety of predictions, with real experimental consequences, can be made (in the case of large extra dimensions and warped models). For example, on the simplest of principles, one might expect to have standing waves in the extra compactified dimension(s). If a spatial extra dimension is of radius $R$, the invariant mass of such standing waves would be $M_n = nh/(Rc)$, with $n$ an integer, $h$ Planck's constant and $c$ the speed of light. This set of possible mass values is often called the Kaluza–Klein tower. Similarly, in thermal quantum field theory a compactification of the Euclidean time dimension leads to the Matsubara frequencies and thus to a discretized thermal energy spectrum.

Examples of experimental pursuits include work by the CDF collaboration, which has re-analyzed particle collider data for the signature of effects associated with large extra dimensions and warped models. Brandenberger and Vafa have speculated that in the early universe, cosmic inflation caused three of the space dimensions to expand to cosmological size while the remaining dimensions of space remained microscopic.

## Space-time-matter theory

One particular variant of Kaluza–Klein theory is space-time-matter theory or induced matter theory, chiefly promulgated by Paul Wesson and other members of the so-called Space-Time-Matter Consortium.[1] In this version of the theory, it is noted that solutions to the equation $R_{AB}=0\,$ with $R_{AB}$ the five-dimensional Ricci curvature, may be re-expressed so that in four dimensions these solutions satisfy Einstein's equations $G_{\mu\nu} = 8\pi T_{\mu\nu}\,$ with the precise form of $T_{\mu\nu}$ following from the Ricci-flat condition on the five-dimensional space. Since the energy-momentum tensor $T_{\mu\nu}$ is normally understood to be due to concentrations of matter in four-dimensional space, the above result is interpreted as saying that four-dimensional matter is induced from geometry in five-dimensional space. In particular, the soliton solutions of $R_{AB} = 0$ can be shown to contain the Friedmann–Lemaître–Robertson–Walker metric in both radiation-dominated (early universe) and matter-dominated (later universe) forms. The general equations can be shown to be sufficiently consistent with classical tests of general relativity to be acceptable on physical principles, while still leaving considerable freedom to also provide interesting cosmological models.

## Geometric interpretation

The Kaluza–Klein theory is striking because it has a particularly elegant presentation in terms of geometry. In a certain sense, it looks just like ordinary gravity in free space, except that it is phrased in five dimensions instead of four.

### The Einstein equations

The equations governing ordinary gravity in free space can be obtained by applying the variational principle to a certain action. Let M be a (pseudo-)Riemannian manifold, which may be taken as the spacetime of general relativity. If g is the metric on this manifold, one defines the action S(g) as $S(g)=\int_M R(g) \mathrm{vol}(g)\,$ where R(g) is the scalar curvature and vol(g) is the volume element. Applying the variational principle to the action, $$\frac{\delta S(g)}{\delta g} = 0,$$ one obtains the vacuum Einstein equations $$R_{ij} - \frac{1}{2}g_{ij}R = 0.$$ Here, $R_{ij}$ is the Ricci tensor.
### The Maxwell equations By contrast, the Maxwell equations describing electromagnetism can be understood to be the Hodge equations of a principal U(1)-bundle or circle bundle π: P → M with fiber U(1). That is, the electromagnetic field F is a harmonic 2-form in the space Ω2(M) of differentiable 2-forms on the manifold M. In the absence of charges and currents, the free-field Maxwell equations are dF = 0 and d*F = 0. where * is the Hodge star. ### The Kaluza–Klein geometry To build the Kaluza–Klein theory, one picks an invariant metric on the circle S1 that is the fiber of the U(1)-bundle of electromagnetism. In this discussion, an invariant metric is simply one that is invariant under rotations of the circle. Suppose this metric gives the circle a total length of Λ. One then considers metrics $\widehat{g}$ on the bundle P that are consistent with both the fiber metric, and the metric on the underlying manifold M. The consistency conditions are: • The projection of $\widehat{g}$ to the vertical subspace $\mbox{Vert}_pP \subset T_pP$ needs to agree with metric on the fiber over a point in the manifold M. • The projection of $\widehat{g}$ to the horizontal subspace $\mbox{Hor}_pP \subset T_pP$ of the tangent space at point p ∈ P must be isomorphic to the metric g on M at π(p). The Kaluza–Klein action for such a metric is given by $S(\widehat{g})=\int_P R(\widehat{g}) \;\mbox{vol}(\widehat{g})\,$ The scalar curvature, written in components, then expands to $R(\widehat{g}) = \pi^*\left( R(g) - \frac{\Lambda^2}{2} \vert F \vert^2\right)$ where π* is the pullback of the fiber bundle projection π: P → M. The connection A on the fiber bundle is related to the electromagnetic field strength as $\pi^*F = \mathrm{d}A$ That there always exists such a connection, even for fiber bundles of arbitrarily complex topology, is a result from homology and specifically, K-theory. Applying Fubini's theorem and integrating on the fiber, one gets $S(\widehat{g})=\Lambda \int_M \left( R(g) - \frac{1}{\Lambda^2} \vert F \vert^2 \right) \;\mbox{vol}(g)$ Varying the action with respect to the component A, one regains the Maxwell equations. Applying the variational principle to the base metric g, one gets the Einstein equations $R_{ij} - \frac{1}{2}g_{ij}R = \frac{1}{\Lambda^2} T_{ij}$ with the stress-energy tensor being given by $T^{ij} = F^{ik}F^{jl}g_{kl} - \frac{1}{4}g^{ij} \vert F \vert^2,$ sometimes called the Maxwell stress tensor. The original theory identifies Λ with the fiber metric g55, and allows Λ to vary from fiber to fiber. In this case, the coupling between gravity and the electromagnetic field is not constant, but has its own dynamical field, the radion. ### Generalizations In the above, the size of the loop Λ acts as a coupling constant between the gravitational field and the electromagnetic field. If the base manifold is four-dimensional, the Kaluza–Klein manifold P is five-dimensional. The fifth dimension is a compact space, and is called the compact dimension. The technique of introducing compact dimensions to obtain a higher-dimensional manifold is referred to as compactification. Compactification does not produce group actions on chiral fermions except in very specific cases: the dimension of the total space must be 2 mod 8 and the G-index of the Dirac operator of the compact space must be nonzero.[2] The above development generalizes in a more-or-less straightforward fashion to general principal G-bundles for some arbitrary Lie group G taking the place of U(1). 
In such a case, the theory is often referred to as a Yang–Mills theory, and is sometimes taken to be synonymous. If the underlying manifold is supersymmetric, the resulting theory is a supersymmetric Yang–Mills theory.

## Empirical tests

Up to now, no experimental or observational signs of extra dimensions have been officially reported. Many theoretical search techniques for detecting Kaluza–Klein resonances have been proposed, using the mass couplings of such resonances to the top quark; however, until the LHC reaches full operational power, observation of such resonances is unlikely. An analysis of results from the Large Hadron Collider in December 2010 severely constrains theories with large extra dimensions.[3]

The discovery of a new boson with Higgs-like decay channels, measured experimentally at a significance of 4.9 sigma, provides a brand new empirical test in the search for Kaluza–Klein resonances and supersymmetric particles. The loop Feynman diagrams that appear in Higgs interactions allow any particle with electric charge and mass to run in such a loop. Standard Model particles besides the top quark and W boson do not make large contributions to the cross-section observed in the H → γγ decay, but if there are new particles beyond the Standard Model, they could potentially change the ratio of the predicted Standard Model H → γγ cross-section to the experimentally observed cross-section. Hence a measurement of any dramatic change to the H → γγ cross-section predicted by the Standard Model is crucial in probing the physics beyond it.

## References

1. L. Castellani et al., Supergravity and Superstrings, Vol. 2, chapter V.11.

• Nordström, Gunnar (1914). "Über die Möglichkeit, das elektromagnetische Feld und das Gravitationsfeld zu vereinigen". Physikalische Zeitschrift 15: 504–506. OCLC 1762351.
• Kaluza, Theodor (1921). "Zum Unitätsproblem in der Physik". Sitzungsberichte der Preussischen Akademie der Wissenschaften (Berlin): 966–972. http://archive.org/details/sitzungsberichte1921preussi
• Klein, Oskar (1926). "Quantentheorie und fünfdimensionale Relativitätstheorie". Zeitschrift für Physik 37 (12): 895–906. Bibcode:1926ZPhy...37..895K. doi:10.1007/BF01397481.
• Witten, Edward (1981). "Search for a realistic Kaluza–Klein theory". Nuclear Physics B 186 (3): 412–428. Bibcode:1981NuPhB.186..412W. doi:10.1016/0550-3213(81)90021-3.
• Appelquist, Thomas; Chodos, Alan; Freund, Peter G. O. (1987). Modern Kaluza–Klein Theories. Menlo Park, Cal.: Addison–Wesley. ISBN 0-201-09829-6. (Includes reprints of the above articles as well as those of other important papers relating to Kaluza–Klein theory.)
• Brandenberger, Robert; Vafa, Cumrun (1989). "Superstrings in the early universe". Nuclear Physics B 316 (2): 391–410. Bibcode:1989NuPhB.316..391B. doi:10.1016/0550-3213(89)90037-0.
• Duff, M. J. (1994). "Kaluza-Klein Theory in Perspective". In Lindström, Ulf (ed.). Proceedings of the Symposium ‘The Oskar Klein Centenary’. Singapore: World Scientific. pp. 22–35. ISBN 981-02-2332-3.
• Overduin, J. M.; Wesson, P. S. (1997). "Kaluza–Klein Gravity". Physics Reports 283 (5): 303–378. arXiv:gr-qc/9805018. Bibcode:1997PhR...283..303O. doi:10.1016/S0370-1573(96)00046-4.
• Wesson, Paul S. (1999). Space-Time-Matter, Modern Kaluza-Klein Theory. Singapore: World Scientific. ISBN 981-02-3588-7.
• Wesson, Paul S. (2006). Five-Dimensional Physics: Classical and Quantum Consequences of Kaluza-Klein Cosmology. Singapore: World Scientific. ISBN 981-256-661-9.

## Further reading

• Grøn, Øyvind; Hervik, Sigbjørn (2007). Einstein's General Theory of Relativity. New York: Springer. ISBN 978-0-387-69199-2.
• Kaku, Michio and Robert O'Keefe (1994). Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension. New York: Oxford University Press. ISBN 0-19-286189-1.
• The CDF Collaboration, Search for Extra Dimensions using Missing Energy at CDF (2004). (A simplified presentation of the search made for extra dimensions at the Collider Detector at Fermilab (CDF) particle physics facility.)
• John M. Pierre, SUPERSTRINGS! Extra Dimensions (2003).
• TeV scale gravity, mirror universe, and ... dinosaurs. Article from Acta Physica Polonica B by Z. K. Silagadze.
• Chris Pope, Lectures on Kaluza–Klein Theory.
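As a numerical footnote to the Kaluza–Klein tower formula $M_n = nh/(Rc)$ quoted in the article, here is a minimal sketch of the mass scale it implies. The compactification radius used below is an arbitrary illustrative choice, not a measured or predicted value.

```python
h = 6.62607015e-34                 # Planck's constant, J*s
c = 2.99792458e8                   # speed of light, m/s
KG_TO_GEV = 1.0 / 1.78266192e-27   # 1 kg expressed in GeV/c^2

R = 1e-18                          # assumed compactification radius in metres (illustrative only)

# Kaluza-Klein tower: M_n = n h / (R c), n = 1, 2, 3, ...
for n in range(1, 4):
    M_kg = n * h / (R * c)
    print(f"n = {n}: M ~ {M_kg:.2e} kg  ~ {M_kg * KG_TO_GEV:.0f} GeV/c^2")
```

For a radius of order $10^{-18}$ m this puts the first tower state at roughly a TeV, which is why collider searches of the kind described under Empirical tests are sensitive to such scenarios.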
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8529242277145386, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/80081/what-are-good-examples-of-spin-manifolds/80231
## What are “good” examples of spin manifolds?

I'm trying to get a grasp on what it means for a manifold to be spin. My question is, roughly:

What are some "good" (in the sense of illustrating the concept) examples of manifolds which are spin (or not spin) (and why)?

For comparison, I'd consider the cylinder and the Möbius strip to be "good" examples of orientable (or not) bundles. I've read the answers to http://mathoverflow.net/questions/66681/classical-geometric-interpretation-of-spinors which are helpful, but I'd like specific examples (non-examples) to think about.

- 3 The manifold $X$ is spin iff the loop space $LX$ is orientable. But I don't think this will help. If you look at $spin^c$: every almost complex manifold $M$ is $spin^c$, and every 4-dimensional manifold is $spin^c$. If, moreover, $\det^{1/2} TM$ exists, then $M$ is spin. – shu Nov 4 2011 at 20:02
- 3 Projective spaces are good examples and non-examples. Similarly, you can consider discrete quotients of spheres. Then depending on the group you quotient by, the resulting space is (or is not) spin. – José Figueroa-O'Farrill Nov 4 2011 at 20:13
- 1 @Otis: See a preprint by Stephan Stolz and Peter Teichner (web.me.com/teichner/Math/Surveys_files/MPI.pdf), where they prove that M is spin if and only if LM is fusion-orientable, and M is string if and only if LM is fusion-spin. – Dmitri Pavlov Nov 5 2011 at 10:48
- 1 @Otis: For the spin^c and spin story, the reference is the lovely book written by Morgan. press.princeton.edu/titles/5866.html – shu Nov 5 2011 at 21:02
- 1 @shu, Otis: The fusion condition mentioned by Dmitri Pavlov is essential - a real Enriques surface is not spin, but its loop space is orientable. – Konrad Waldorf Nov 7 2011 at 8:55

## 4 Answers

There's the traditional obstruction-theoretic perspective. Orientability means the tangent bundle trivializes over a 1-skeleton. Dually, you could think of that as saying the complement of a co-dimension $2$ subcomplex has a trivial tangent bundle. Admitting a spin structure is the same kind of condition, but now it is the tangent bundle trivializing over a 2-skeleton; dually, the complement of a co-dimension three subcomplex admits a trivial tangent bundle.

A surface is orientable if and only if it contains no Möbius bands -- a regular neighbourhood of any simple closed curve must be a cylinder. In higher dimensions this translates into a manifold being orientable if and only if it contains no twisted bundles $D^{n-1} \rtimes S^1$, i.e. regular neighbourhoods of simple closed curves are diffeomorphic to $D^{n-1} \times S^1$.

For spin structures there's something very similar. Of course, a surface admits a spin structure if and only if it is orientable. It's a more interesting notion in higher dimensions. The statement there is that the manifold is orientable, and if you take a regular neighbourhood of any surface in the manifold, then it has a trivial tangent bundle. So manifolds like $\mathbb RP^3$ are perfectly valid spin manifolds -- $\mathbb RP^3$ contains $\mathbb RP^2$, but the total space of its normal bundle has a perfectly trivializable tangent bundle. Technically, the condition is a little stronger than that -- you can trivialize the tangent bundle of the complement of a co-dimension $3$ subset. So not only can you trivialize the total spaces of normal bundles of surfaces, but even the regular neighbourhoods of unions of surfaces.
So if you want a manifold that isn't spin, the archetype would be a vector bundle over a surface such that the total space does not have a trivializable tangent bundle. Take the $D^2$-bundle over $S^2$ with Euler class $\chi$; I think the total space is spin if and only if $\chi$ is even. I suppose you have more entertaining examples when dealing with the regular neighbourhood of a 2-complex that isn't itself a manifold.

edit: Milnor's "Spin structures on manifolds" in L'Enseignement Mathématique Vol 9 (1963) is an excellent reference for most of the above. I don't believe he goes into all the descriptions above, since I think he wants to keep the article simple. The Poincaré duality interpretation above is a very standard mode of thinking that's employed throughout much of low-dimensional topology. Kirby's book on 4-manifolds is a nice place to look for this material. Specifically, R. Kirby, "The topology of 4-manifolds", Springer-Verlag (1989). A more modern reference would be Gompf and Stipsicz, but again I don't think they use all the above descriptions. Milnor and Stasheff's "Characteristic Classes" describes most of the basic constructions involved above, in the obstruction theory section. In a couple of months I'll be putting up a paper on the arXiv that gives some very combinatorial ways of describing spin and spin^c structures on manifolds (mostly for computer implementation). I hope that will be a good reference, too! But the paper is still unreadable.

- Thanks! This is enlightening. Can you suggest a reference for this? – Otis Chodosh Nov 4 2011 at 23:14

A simply connected $4$-manifold is spin iff all embedded oriented surfaces have even self-intersection number or, equivalently, if the quadratic form $H_2 (M;Z) \to Z$ induced by the intersection form takes even values. This is by the following string of arguments:

1. $M$ is spin iff $w_2 (TM)=0$.
2. $w_2 (TM)=0$ iff the linear form $H_2 (M; Z/2) \to Z/2$, $a \mapsto \langle w_2 (TM);a\rangle$, is null.
3. Any class $a \in H_2 (M;Z)$ can be represented as the fundamental class of an embedded oriented surface $F \subset M$.
4. $w_2 (TM)|_F = w_2 (\nu_F)$ by the product formula for Stiefel-Whitney classes and because $F$ is spin.
5. $w_2 (\nu_F)$ is the mod $2$ reduction of the Euler class of the normal bundle of $F$.
6. $\langle [F]; \chi(\nu_F) \rangle$ is the self-intersection number of $F$, or equivalently, the value of the quadratic form at $[F]$.

Now you should play a bit with $4$-manifolds and might get a feeling for the spin condition.

- If you know about Steenrod operations, here's a very convenient characterization: A manifold $M$ is spin iff its Poincaré duality in $H^*(M,\mathbb Z/2)$ is compatible with $Sq^1$ and $Sq^2$. Similarly, oriented manifolds are those whose Poincaré duality in $H^*(M,\mathbb Z/2)$ is compatible with $Sq^1$. The story continues: string manifolds have a Poincaré duality in $H^*(M,\mathbb Z/2)$ that is compatible with $Sq^1$, $Sq^2$ and $Sq^4$ (but now, that's no longer an if and only if). My paper http://arxiv.org/abs/0810.2131 with Chris Douglas and Mike Hill describes all that in detail and provides many concrete examples.

- If $M$ is a spin manifold, then any submanifold of codimension 1 is also a spin manifold. This yields a lot of examples, for example, that $S^n$ is spin etc.
(I may not have understood your point completely.) Edit: As pointed out in the comments, one has to demand as well that the submanifold is orientable. - this is false: see Ryan Budney's answer (RP^2 in RP^3). Maybe this is true if the submanifold is codimension 1 and 2-sided? – Agol Nov 5 2011 at 21:31 2 It's true so long as the normal bundle to the codimension 1 submanifold is trivial. So if both manifolds involved are orientable then it's true. – Vitali Kapovitch Nov 6 2011 at 1:38
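As a concrete check of the even-intersection-form criterion from one of the answers above (standard examples, not taken from this thread): the generator of $H_2(\mathbb{CP}^2;\mathbb Z)$ is the class of a line $[\mathbb{CP}^1]$, with self-intersection $1$, which is odd, so $\mathbb{CP}^2$ is not spin. By contrast, the intersection form of $S^2 \times S^2$ is the hyperbolic form $$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$ whose quadratic form $Q\big((m,n)\big)=2mn$ takes only even values, so $S^2 \times S^2$ is spin.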
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.910577654838562, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/5763/ndsolve-and-results-of-the-previous-computations
# NDSolve and results of the previous computations I want to solve following system of ODEs: $\Bigg\{ \begin{array}{} \frac{\partial C}{\partial t}=\frac{2W_b}{Rsin(\theta)}(1-\frac{C}{\gamma})+\frac{\nu}{\pi R^2}\frac{\partial C}{\partial Z} \\ \frac{\partial R}{\partial t}=\frac{W_b}{\gamma sin(\theta)} \end{array}$ Where $W_b=k_bC_h(1-\frac{C}{C_h})^{\frac{4}{3}}$ and $\nu, k_b, C_h, h, \gamma, \theta$ are constants. It is known that: $\frac{\partial C}{\partial Z}=\frac{C_i-C_{i-1}}{Z_i-Z_{i-1}}$, where $Z_i-Z_{i-1}=h$ and $h$ is a constant. So, the first equation become this: $\frac{\partial C}{\partial t}=\frac{2W_b}{Rsin(\theta)}(1-\frac{C}{\gamma})+\frac{\nu}{\pi R^2}\frac{C_i-C_{i-1}}{h}$ My question is how can I get the result of previous computation, namely $C_{i-1}$? Here is the mathematica code I wrote: ````Wb[C_] := kb*Ch*(1 - C/Ch)^(4/3); system := { c'[t] == (2*Wb[c[t]])/(r[t]*Sin[theta])*(1 - C/gamma) + v/(Pi*r[t]^2)*(c[t] - ?? )/h, r'[t] == Wb[c[t]]/(gamma*Sin[theta]), c[0] == 0, r[0] == 0.15}; solution = First@NDSolve[system, {c, r}, {t, 0, 25900000}, Method -> "BDF"]; ```` UPDATE: Simplified version of the problem. This is a model based on a plug flow reactor. $\Bigg\{ \begin{array}{} \dot{x} = \frac{c_1 x f(x)}{y} + \frac{\partial x}{\partial z} \\ \dot{y} = c_2 f(x) \end{array}$ Where c1 and c2 are constants. This is a model of a physical process and it was shown that `z` dimension is quantified by a chunks of a constant size `h` and `x` is monotonously increasing, so $\frac{\partial x}{\partial z}=\frac{\Delta x}{\Delta z}=\frac{x_i - x_{i-1}}{h}$ ````f[x_] := ...; system := { x'[t] == c1*x*f[x[t]]/y[t] + (x[t] - ?)/h, y'[t] == c2*f[x[t]], x[0] == 0, y[0] == 0.15}; solution = First@NDSolve[system, {x, y}, {t, 0, 25900000}, Method -> "BDF"]; ```` - First of all, you can already solve for R(t) by hand. Second, the remaining equation probably should have C replaced by C_i. Is that what you mean? You will then also need the initial conditions for all C_i, I would guess. With that, you would have a coupled system of first-order equations in time for the C_i. – Jens May 19 '12 at 4:50 I've updated the question. Wb is a function dependent on C, I'm not sure if it possible to solve it by hand. – Andrew May 19 '12 at 14:20 1 So, initially $x=x(y,z)$ but you would like to replace this 2d function by a finite number of 1d functions, ie, you'd like to discretise the $z$ direction by hand. Right? – acl May 19 '12 at 17:07 1 @Andrew then you need to solve a set of differential equations for $x_i(y,t)$ (ie, a set of variables, not a single var) and $y(x_1,\ldots,x_N,t)$, whereas you're trying to set the problem up for a single $x$. – acl May 19 '12 at 17:30 1 Look up Delayed Differential Equations in the documentation (howto/SolveDelayDifferentialEquations). Your ?? will be x[t-h]. Also figure out the correct initial condition because you will need to specify it for a range of values. – Daniel Lichtblau May 20 '12 at 16:10 show 8 more comments ## 2 Answers It seems (from the comments) that what you want to do is this: initially x=x(y,z) but you would like to replace this function of 2 vars by a finite number of functions of 1 var, ie, you'd like to discretize the $z$ direction by hand. This means that you need to solve a set of differential equations for the $x_i(y,t)$ (a set of functions, not a single fun) and $y(x_1,\ldots,x_N,t)$. You're trying to set the problem up for a single $x$, and that is the problem. 
OLD ANSWER (I think this is neat so I'll leave it here for now) I very likely misunderstood you. If, however, I understood correctly the question, it is something like: Suppose I have $y'(t)=f(y)$ and ask mathematica to solve it numerically. How do I inspect the values of $t$ used? The answer is to rig the ODE so that you can peek at the values mathematica evaluates, as follows: ````ClearAll[f]; upT = 2*Pi; f[y_?NumericQ, t_?NumericQ] := (Sow[{t, y}]; -Sin@y) ```` Here I define $f(y)=-\sin(y)$, but in a way that allows me to watch which values are passed to $f$ by `NDSolve` (the `Sow` bit). Then: ````points = (sol = NDSolve[ { y'[t] == f[y[t], t], y[0] \[Equal] 1 }, y, {t, 0, upT} ]) // Reap // Last // Last; ```` solves the ODE, collecting the values of $y$ and $t$ passed to $f$ by `NDSolve`. Then plot the solution, along with the points at which $f$ was evaluated: ````Plot[ y[t] /. sol, {t, 0, upT}, Epilog :> {Red, PointSize[.015], Point[points]}, ImageSize -> 640 ] ```` Not sure if I am answering the right question though. It's also interesting to look at the size of the steps taken: ````ListPlot[ Differences[points[[All, 1]]], ImageSize -> 640, BaseStyle -> FontSize -> 20, AxesLabel -> {"i", "t[i]-t[i-1]"} ] ```` so they are not all equal (not all $h$), sometimes the solver goes backwards etc. Of course this will depend on the solver you use. - You know, you could have used `EvaluationMonitor :> Sow[{t, y[t]}]` as an `NDSolve[]` option setting, don't you? – J. M.♦ May 19 '12 at 18:32 @J.M. yes, but I am not the most elegant of programmers :) – acl May 19 '12 at 18:35 Your syntax has me a bit confused. SetDelayed (:=) looks off to me. Typically one uses this to define functions. I think you want Equal (==) where you have the system of equations and Set (=) where you assign the value of the expression to "solution". Does this get you any closer (I replaced your "??" with "x")? ````solution = First@NDSolve[{ c'[t] == (2*Wb[c[t]])/(r[t]*Sin[theta])*(1 - C/gamma) + v/(Pi*r[t]^2)*(c[t] - x)/h, r'[t] == Wb[c[t]]/(gamma*Sin[theta]), c[0] == 0, r[0] == 0.15 }, {c, r}, {t, 0, 25900000}, Method -> "BDF"] ```` - Well, no, the problem is that I don't know what `x` should be. – Andrew May 19 '12 at 14:18 lang-mma
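For what it's worth, here is a minimal sketch of the discretization idea from the first answer (solve for a vector of $x_i$, one per $z$-slice). It is written in Python/SciPy rather than Mathematica purely to show the structure; the constants, the stand-in $f$, the inlet value, the plug-flow sign convention for the $z$-derivative, and the choice to carry one $y$ per slice are all placeholder assumptions, not values or choices taken from the question.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Everything here is a placeholder, chosen only so the sketch runs stably.
c1, c2, h = 1.0, 0.5, 0.1
N = 20                       # number of z-slices
x_in = 1.0                   # assumed inlet value feeding slice 0

def f(x):
    return 1.0 - x           # stand-in for the unspecified f(x)

def rhs(t, u):
    x, y = u[:N], u[N:]
    dx = np.empty(N)
    # transport term written as (x_{i-1} - x_i)/h (usual plug-flow form;
    # the sign depends on the flow direction, which the question leaves implicit)
    dx[0] = c1 * x[0] * f(x[0]) / y[0] + (x_in - x[0]) / h
    dx[1:] = c1 * x[1:] * f(x[1:]) / y[1:] + (x[:-1] - x[1:]) / h
    dy = c2 * f(x)           # one y per slice here; a single shared y is the other possible reading
    return np.concatenate([dx, dy])

u0 = np.concatenate([np.zeros(N), 0.15 * np.ones(N)])   # x(0)=0, y(0)=0.15, as in the question
sol = solve_ivp(rhs, (0.0, 2.0), u0, method="BDF")
print(sol.y[:N, -1])         # x_i profile at the final time
```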
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9112802147865295, "perplexity_flag": "middle"}
http://motls.blogspot.com.es/2012/06/last-transit-of-venus-in-this-century.html?m=1
# The Reference Frame Our stringy Universe from a conservative viewpoint ## Saturday, June 02, 2012 ### The last transit of Venus in this century It may seem as though the 21st century is still just getting started. But there is one event that will occur on Tuesday night UTC and its next repetition will have to wait until the year 2117. And in the "hectic" 20th century, this event hasn't occurred at all. What is it? The Venus will perforate the Sun: the sky is falling. Paul Krugman proposes to build high-speed rails to fight against the effects of the perforation, to triple the U.S. public debt, and to increase the government, the only entity that is allowed to spend big. It's the rarest astronomical event among the predictable ones: the transit of Venus. The 2012 transit of Venus will begin on Tuesday, June 5th, 2012, at 18:09 Boston Daylight Savings Time (or 6 hours later according to the Pilsner Summer Time) and will last for 6 hours 40 minutes or so. Incidentally, right in the middle of the transit of Venus, the IPv6 Internet Protocol will be launched and good enough internet services will become available both in IPv6 and IPv4 permanently – for many years in which both protocols will co-exist. Some common sense disclaimer: Europeans will see almost nothing because the Sun can't be seen at night! ;-) Only an (extended/reduced) hemisphere will see (the-whole/at-least-part-of-the) transit of Venus. You surely want a no-nonsense list of the "recent" years when it has occurred or will occur and here it is: 1639; 1761, 1769; 1874, 1882; 2004, 2012; 2117, 2125... The rare event has been useful to determine the solar parallax (8.8 arc seconds) and therefore the distance between the Earth and the Sun. To show how it was done, let me repost my answer at Physics Stack Exchange. Edmund Halley's method requires one to measure the timing of the beginning of the transit and the end of the transit; both pieces of data have to be measured at two places of the Earth's globe whose locations must be known. The picture by Vermeer, Duckysmokton, Ilia shows that the two places on Earth have differing locations in two different directions (the differences in the distance from the Sun and Venus are too small to be measurable): one of them is parallel to the direction of the transit of Venus and will be reflected in the overall shift of the timing; the other component is transverse to it and it will actually shift the line along which Venus moves and crosses the Sun in the up/down direction i.e. it will make the duration of the transit longer. Each of these pieces of data – overall shift in the timing arising from one coordinate's difference between the two terrestrial locations – and the difference between the length of the transit – due to the other coordinate – are in principle enough to determine the solar parallax. Because synchronization of clocks at very different locations was difficult centuries ago, I suppose that the latter – the difference between $$\Delta t_1$$ and $$\Delta t_2$$ – was probably more useful historically. But we're talking about $$O(10)$$ minutes differences in both quantities. At any rate, Halley didn't live to see a proper measurement (the transit occurs about twice a century and the two events are clumped together with a 8-year break in between). The best he could get was 45 angular seconds for the parallax; the right answer is about 8.8 seconds. He knew that his result was very inaccurate. 
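To see what that 8.8-arcsecond figure implies numerically — the conversion is spelled out in words in the next paragraph — here is a minimal check in Python (the Earth radius below is the standard equatorial value):

```python
import math

parallax_arcsec = 8.8        # modern value of the solar parallax
earth_radius_km = 6378.0     # equatorial radius of the Earth

parallax_rad = parallax_arcsec / 3600.0 * math.pi / 180.0   # about 4.3e-5 rad
au_km = earth_radius_km / parallax_rad
print(f"1 AU ~ {au_km:.3e} km")      # about 1.5e8 km
```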
Note that the solar parallax is the angle at which the Earth's radius is seen from the Sun, i.e. the difference in the rays needed to observe the Sun from the Earth's center and/or a point on the Earth disk's surface. When you convert 8.8 angular seconds to radians, i.e. multiply by $$1/3,600\times \pi/180,$$ you get $$4.3\times 10^{-5}.$$ Now, divide 6378 km by this small number to get about 150 million km for the AU.

Some order-of-magnitude estimates for the numbers. Venus orbits at 0.7 AU, so during the transit it is actually closer to the Earth than the Sun is. It means that a shift by 6,000 km up/down on the Earth's side corresponds to about 12,000 km up/down on the Sun's side. So the two horizontal lines crossing the Sun on the picture may be separated by about 12,000 km. Compare it with the solar radius near 700,000 km: you may see that we're shifting the horizontal lines by about 1% of the Sun's radius, and the relative difference between $$\Delta t_1$$ and $$\Delta t_2$$ will be comparable to 1%, too. The last transit in 2004 took about 6 hours, so the difference in the duration at various places is of order 10 minutes. The 2012 transit of Venus on Tuesday night UTC will take over 6 hours, too; the timing and duration differ by about 7 minutes depending on the location. If you've been dreaming about observing the transit of Venus, don't forget about Tuesday 22:49 night UTC; the following transit will occur in 2117.

Amelia Earhart's mystery may be solved

Some piece of early 20th century cosmetics – to suppress freckles, something she hated about herself – was found on an island in the Republic of Kiribati, Central Pacific Ocean. It seems likely that she landed there 75 years ago (during her attempt to fly around the globe), sent lots of radio messages, which were ignored as "not credible" by the ships around because this invalid description of the signals was convenient for the lazy folks on these ships (if you're a hero, you shouldn't rely on mediocre bastards), and she died on the island later. Unless I am wrong, this hypothesis predicts that there must be a defunct aircraft somewhere around the island.

Toxique Girls feat. Ewa Farna (embedded video)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490887522697449, "perplexity_flag": "middle"}
http://nrich.maths.org/2322&part=
### All in the Mind

Imagine you are suspending a cube from one vertex (corner) and allowing it to hang freely. Now imagine you are lowering it into water until it is exactly half submerged. What shape does the surface of the water make around the cube?

### Painting Cubes

Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?

### Tic Tac Toe

In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?

# Painted Cube

##### Stage: 3 Challenge Level:

Imagine a large cube made up from $27$ small red cubes. Imagine dipping the large cube into a pot of yellow paint so the whole outer surface is covered, and then breaking the cube up into its small cubes. How many of the small cubes will have yellow paint on their faces? Will they all look the same? Now imagine doing the same with other cubes made up from small red cubes. What can you say about the number of small cubes with yellow paint on?
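One way to explore the question computationally is the short sketch below, which classifies the small cubes by whether they sit at a corner, along an edge, in the middle of a face, or in the interior (so it does give away part of the answer).

```python
def painted_faces(n):
    """Counts of small cubes with 3, 2, 1 and 0 painted faces in an n x n x n cube (n >= 2)."""
    return {
        3: 8,                  # corner cubes
        2: 12 * (n - 2),       # edge cubes that are not corners
        1: 6 * (n - 2) ** 2,   # cubes in the middle of a face
        0: (n - 2) ** 3,       # hidden interior cubes
    }

for n in (3, 4, 5):
    counts = painted_faces(n)
    assert sum(counts.values()) == n ** 3   # sanity check: every small cube is counted once
    print(n, counts)
```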
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9584306478500366, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/markov-process
# Tagged Questions A stochastic process satisfying the Markov property: the distribution of the future states given the value of the current state does not depend on the past states. Use this tag for general state space processes (both discrete and continuous times); use (markov-chains) for countable state space ... 1answer 55 views ### Canonical Markov Process Let $X$ be a canonical, right-continuous Markov process with values in a Polish state space $E$, equipped with Borel-$\sigma$-algebra $\mathcal{E}$ and we assume that $t\rightarrow E_{X_{t}}f(X_{s})$ ... 1answer 37 views ### three-state Markov chain a male and a female go to a 2-table restaurant on the same day. each day the male sits at one or the other of the 2 tables, starting at the table 1, with a Markov chain transition matrix: ... 1answer 45 views ### Rewriting Markov process Let $X$ be a Markov proces with state space $(E,\mathcal{E})$with initial distribution $\nu$ and transition function $P_{t}$, so $$E_{\nu}(f(X_{t+s})\mid\mathcal{F}_{s})=P_{t}f(X_{s})$$ Suppose that ... 1answer 25 views ### Question on Markov chains of expected number of states I am confused with an statement from my probability book that has to do with Markov chains. I hope someone could clarify that, if possible....Consider a Markov chain for which $P_{11}=1$ and ... 2answers 37 views ### Diffusion process. Distribution vs transition probability. I need confirmation on the following problem: Take a SDE of the form: \begin{equation} dX_t=a(X_t,t)dt+b(X_t,t)dW_t \end{equation} where all the conditions, such that the solution $X_t$ is defined ... 1answer 30 views ### General State Space Markov Chain I am having some difficulty understanding some early results of Markov Chain theory on a general state space. We have a function (Kernel) $K:E \times E \rightarrow \mathbb{R}$, and a distribution ... 0answers 20 views ### Variability in estimations over a non-ergodic/non-regular Markov process Imagine we have a non-ergodic/non-regular Markov Process with with $n$ states. Among these $n$ states, there are $k$ absorbing states. For each of the $n-k$ non-absorbing states, it is not possible ... 1answer 42 views ### Identity in Markov Processes I want to know if my reasoning here is correct, it seems simple enough but I just want clarification (I am considering the proof that if a Markov process satisfies the detailed balance condition, then ... 0answers 26 views ### Probability - How to calculate Xt by time t A large group of cars are waiting for a car wash, and the line is infinitely long, always a car in line waiting to be washed, and that the staff can only tend to 1 car at a time. The business has 1 ... 0answers 58 views ### Markov Chains Worked Example (Stirzaker) I have a Markov Chain with state space the non-negative integers. The rules of the M.C. are that when it is in state $i \neq 0$, it moves to one of {${0,1,2,\ldots,i+1}$} with probability $1/(i+2)$ ... 0answers 21 views ### On discrete-time stochastic attractivity of linear systems Let $m$ be a probability measure on $Y \subseteq \mathbb{R}^p$, so that $m(Y)=1$. Consider a continuous function $f: \mathbb{R}^n \rightarrow \mathbb{R}^p$. Assume that $f(0) = 0$, and that there ... 1answer 32 views ### Have there some discrete-time continuous-state Markov processes been studied? I have seen discrete-time discrete-state Markov processes (such as random walks), continuous-time discrete-state Markov processes (such as Poisson processes), and continuous-time continuous-state ... 
0answers 36 views ### Continuous time markov chain Jobs arrive at a central computer according to a $PP(\lambda)$. The job processing times are i.i.d. $\exp(\mu)$. The computer processes them one at a time in the order of arrival. The computer is ... 1answer 46 views ### On discrete-time stochastic attractivity Let $m$ be a probability measure on $Y \subseteq \mathbb{R}^p$, so that $m(Y)=1$. Consider a function $f: \mathbb{R}^n \times Y \rightarrow \mathbb{R}^n$, continuous on the first arguments, ... 2answers 75 views ### Probability of Extinction in a simple Birth and Death Process We are asked to show that the probability of extinction $\zeta=\lim_{t\to \infty} P\left(X(t)=0\right)$ given by: \zeta=\begin{cases}1&\text{if }\lambda\le \mu,\\ \left(\frac \mu\lambda ... 1answer 41 views ### Metropolis Hastings definition - Proving $\pi(x)$ is the invariant density of our transition matrix I'm currently working through the proof of the Metropolis-Hastings algorithm, and using two sources: page 328, section 3 page 1704-1705 I have a good understanding of most of the proof until ... 0answers 23 views ### Single evaluation for using exponential sampling until past a point I am trying to improve an algorithm that looks like the following (and am getting stumped): I am provided with a starting time, rate, and a target time. I then use an exponential distribution to ... 0answers 83 views ### Why Markov matrices always have 1 as an eigenvalue Also called stochastic matrix. Let $A=[a_{ij}]$ - matrix over $\mathbb{R}$ $0\le a_{ij} \le 1 \forall i,j$ $\sum_{j}a_{ij}=1 \forall i$ i.e the sum along each column of $A$ is 1. I ... 1answer 38 views ### Hitting times of Markov chain/process have always finite moments? Consider an irreducible ergodic Markov chain on a finite state space $\Omega$. Then any state is positive recurrent and this should suffice to conclude that the mean hitting time of state \$s \in ... 2answers 42 views ### Specifying differential equation that describes a particular set of dynamics. There are $S$ individuals who are susceptible to infection, and $I$ who are infectious. $S + I = N$, where $N$ is the total size of the population. Each infectious transmit the disease to a ... 1answer 51 views ### Showing a process is not markov I keep searching but I can't find any place that gives a good method of showing a process is NOT Markov. The definition I am using is that for every $s<t$ and $g$ bounded borel there is $f$ borel ... 1answer 32 views ### Amount of information a hidden state can convey (HMM) In this paper (Products of Hidden Markov Models, http://www.cs.toronto.edu/~hinton/absps/aistats_2001.pdf), the authors say that: The hidden state of a single HMM can only convey log K bits of ... 2answers 35 views ### How do you explain $f(x_4|x_3)f(x_3|x_2)f(x_2|x_1)f(x_1) = f(x_4,x_3,x_2,x_1)$? Let $x_1=x(n_1)$, $x_2=x(n_2)$, $x_3=x(n_3)$ and $x_4=x(n_4)$ be random Markov processes $(n_1 < n_2 < n_3 < n_4)$. I don't understand the identity given below on their probability density ... 2answers 83 views ### Is first order moving average a Markov process? Given first order moving average $$x(n) = e(n) + ce(n-1)$$ where $e(n)$ is a sequence of Gaussian random variables with zero mean and unit variance which are independent of each other, and $c$ is ... 0answers 105 views ### Is there monotone class theorem used in one of these steps? 
IN Rogers & Williams "Diffusions, Markov Process and Martingales" they introduce the resolvent as: $$R_\lambda f(x):=\int_{[0,\infty)}e^{-\lambda t}P_tf(x)dt=\int_ER_\lambda(x,dy)f(y)$$ where ... 1answer 27 views ### A book on finite state continuous time Markov chain I want to read in detail about finite state continuous time Markov chain. Can anybody suggest a book which deal this topic in detail? 1answer 49 views ### Random Process derived from Markov process I have a query on a Random process derived from Markov process. I have stuck in this problem for more than 2 weeks. Let $r(t)$ be a finite-state Markov jump process described by ... 2answers 94 views ### Expected value of stochastic process I have the following problem: $X_1,X_2,...$ are positive identically distributed random variables with the distribution function $F(x) :=P(X_n \leq x)$ and we assume that $F(0)<1$ for all $n$. Let ... 0answers 47 views ### Conditional distributions of (higher-order) autoregressive Markov processes If we specify an $p$-th order autoregressive process in discrete time by its transition distribution $F_{t|t-1,\ldots,t-p}$, what can be said about lower order conditional distribution where we ... 2answers 37 views ### Question on MIT Markov Matrices video Markov matrices are pretty new to me and I'm a little rusty with my linear algebra. My question stems from watching this video from YouTube on Markov matrices. For those who wish to skip the video, ... 1answer 33 views ### Showing a certain random process is a Markov Process I have the following example of a random process: A person has two houses, house A and house B in which he can stay, we denote by $X_{i}\in\left\{ A,B\right\}$ the house he stayed in on the i-th day ... 2answers 39 views ### How to prove the existence of the limit of Markov transition matrix? Does the limit of a Markov transition matrix $M$: $$\lim_{n\to\infty}M^n$$ always exist? And if yes, how to prove it? 1answer 192 views ### Motivation of Feynman-Kac formula and its relation to Kolmogorov backward/forward equations? Kolmogorov backward/forward equations are pdes, derived for the semigroups constructed from the Markov transition kernels. Feynman-Kac formula is also a pde corresponding to a stochastic process ... 3answers 229 views ### Finding the transition probability matrix, two switches either on or off.. Each of two switches is either on or off during a day. On day n, each switch will independently be on with probability [1+number of on switches during day n-1]/4 For instance, if both switches are on ... 1answer 74 views ### Markov Process with Stationary Distribution I have the following problem: If I have a markov process with stationary distribution. The state space for the MP is integers. I also know that $P_{i,j}>0$ for all i and j. It is also given that ... 0answers 35 views ### Markov Model Brainteaser An orangutan and a chimpanzee each sit at a computer typing 1 character per second. The orangutan chooses each character independently from $S$ with probability $\frac{1}{27}$. The chimpanzee follows ... 0answers 53 views ### A different Markov property definition In Shreve's Stochastic Calculus in Finance, the Markov property is defined as Definition 2.3.6. Let $(\Omega,\mathcal F,P)$ be a probability space, let $T$ be a fixed positive number, and let ... 0answers 82 views ### What are the definitions of a diffusion process and a jump process? I have seen following different definitions of a diffusion process and of a jump process. 
I was wondering how they are actually defined? Also are diffusion processes and jump processes necessarily ... 0answers 32 views ### Is HMM discriminative or generative? Wikipedia "An HMM can be considered as the simplest dynamic Bayesian network." Here. "In probability and statistics, a generative model is a model for randomly generating observable data, ... 0answers 54 views ### How to prove ergodic property from aperiodicity and positive recurrence How to prove that in case of an irreducible, aperiodic and positive recurrent Markov Chain time average along sample paths is equal to the ensemble average ? i.e. \lim_{n\to \infty ... 1answer 125 views ### Multidimensional infinitesimal generator of a jump-diffusion Let $X=\{X_t\}_{t\geq0}$ be an $n$-dimensional Markov process, defined by the SDE $$dX_t = \mu(t, X_t) \, dt + \sigma(t,X_t) \, dB_t+\beta(t-,X_{t-}) \, dN_t,$$ where $\mu, \sigma$ and $\beta$ are ... 1answer 113 views ### Markov Chain Transition Intensity Conversion I have a question about converting a 3-state discrete state, continuous-time, markov chain to a 2-state. My 3-state model has states: Well (state 1), Ill (state 2) and Dead (state 3). ... 0answers 76 views ### Question on Conditional expectation Let $X_1$ and $X_2$ be two random variables on $(\Omega,\mathcal{B},P)$. Suppose there is a function $g:\mathcal{B}\times\mathbb{R}\rightarrow[0,1]$ such that for any $x$, $g(\cdot,x)$ is a ... 2answers 34 views ### Different limiting distributions but they both satisfy same equations I needed to find the limiting distribution of the matrix $$\pmatrix{ 0 & 0 & 1 \\ 1 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0}$$ Instead of $\pi$ I'll use $A, B$ and $C$ ... 0answers 37 views ### HMM - how to calculate p(x[t]=i)? Suppose we have an HMM, where $y_t$ are the observations and $x_t$ are the latent states: $p(y_1,\ldots,y_T,x_1,\ldots,x_T) = p(x_1)\prod_{t=1}^Tp(x_{t+1}|x_t)p(y_t|x_t)$ Suppose we already used the ... 1answer 46 views ### HMM as special case of MRF I have learned that any Hidden Markov Model (HMM) can be described as a special case of a Markov Random Field (MRF) model. However, AFAIK, the dependencies in a HMM are directed, while the ... 1answer 63 views ### Markov Process: Show that the minimum time taken to get back to state $1$ is $(0.5)^{k-1}$ Suppose that the chain is intitially in state $1$, i.e $P(X_0 = 1) = 1$. Let $\tau$ denote the time of first returen to state $1$, i.e $$\tau = \min\{n > 0: X_N = 1\}.$$ Show that ... 1answer 51 views ### geometric sum - weighted random walk I am trying to model the following sum: $\sum_{i=0}^{n}{W_i \alpha^{i}}$ where $\alpha \in[0, 1)$ and $W_n$ takes values 0 or 1 and may be modeled as a markow chain or for simplicity as a binary ... 1answer 66 views ### Is this Markov? Consider a process $\{X_n, n\geq 0\}$ with state space $S=\{0,1,2\}$ s.t. P(X_{n+1}=j | X_n=i, X_{n-1}=i_{n-1}, \dots, X_0=i_0)=\begin{cases} P_{ij}^I \ \ \ n \ \mbox{ even},\\ P_{ij}^{II} \ \ \ ... 1answer 58 views ### Fixed point of transition kernel generates martingale Let $P^{h}, h \geqslant 0$ be a transition kernel for some homogenous Markov process $X_t$, $\mathbb{E}|X_t|<\infty$: $$P_{X_{t+h},X_t}(A,B) = \int\limits_{A}P^h(x,B)P_{X_t}(dx)$$ where ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 100, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9005563259124756, "perplexity_flag": "middle"}
http://ibyea.wordpress.com/tag/astronomy/
# IBY’s Island Universe

## Waves and Rings

May 16, 2013

On Earth, one can indirectly work out the structure of the planet's interior by measuring the waves created by an earthquake. The Earth's interior, having layers with different compositions, will refract and reflect those waves, and by measuring the waves all over the Earth, we can make a reasonable inference about what the Earth is like inside. Unfortunately, we can't exactly place seismographs on other planets. In the case of Saturn, though, there is a structure you can measure which will indirectly tell us what is going on inside the planet: the rings. It turns out that while their shape is predominantly affected by Saturn's moons, the moons alone don't account for all the waves on them. The planet itself affects the rings, and one of the findings is that the inside of the planet is sloshing around. More details are in the link above.

Posted by ibyea

## Atmospheric Composition of Extrasolar Planets Detected

March 16, 2013

I didn't think I would live to see the day, but they did it. Obviously I have underestimated the capabilities of our current technology way too much ^_^ . The planets did have the benefit of being far away from their parent star, and they are huge, but still, it is quite the accomplishment. I also found a science paper (which I found thanks to this) about the spectroscopy of the planets, if you want to read it (beware, not for your average joe). Yeah, not much to say about this one; the links will tell you everything.

Posted by ibyea

## New Horizons For Extrasolar Planet Discoveries

February 21, 2013

There is exciting news for extrasolar planet enthusiasts. A planet smaller than Mercury has been discovered around a regular star, one similar to the sun. This is another excellent discovery made with the already very productive space telescope Kepler. The discovery was helped by the fact that the planet orbits very close to the star. After all, an astronomer needs to detect at least three signals in order to confirm a planet, and finding a planet that passes in front of the star from Earth's view is more probable the closer it is. The latter is important because Kepler finds planets by looking for a dip in the star's brightness caused by the planet moving in front of it. Now, is it the smallest planet ever discovered? Possibly not; that title probably belongs to one of the planets of a pulsar system. But that comparison is a bit unfair, since a pulsar has a very regular rotation period, which one can measure because it sends out jets of light that sweep across the Earth every time it rotates. One can use discrepancies in the rotational period to detect planets that are very small in mass. For the transit method, though, this is very good. It means we are well on our way to discovering rocky planets in habitable zones. We just need to observe a lot longer: three years for an Earth-sized object that goes around in one year. And we get even more variety in our discoveries, instead of just the gas giants and super-Earths which have been dominating discoveries because finding bigger things is easier.

Posted by ibyea

## A Guide to Kepler’s First Law

January 24, 2013

Kepler's first law states that an object orbiting another object under a gravitational pull moves along a conic section. If the orbit is closed, it is an ellipse. It also turns out to be really hard to prove. You either have to use calculus and differential equations, or use geometry with lines and stuff and know the properties of the ellipse well.
Either way, you have to set up the problems in creative ways. In this post, I would like to collect all the ways Kepler's first law can be derived. I have always found it annoying how scattered the proofs were, and I would like to leave this behind for anyone who is itching to find out how Newton's laws imply elliptical orbits and vice versa. For a newbie, the best proof out there is, in my opinion, Feynman's geometric proof. While it is still complicated, it is not as hard as the other proofs to understand. You don't have to know anything too complicated except for some of the properties of the ellipse, and get used to the methods of geometry. It is also great for its clarity, unlike Newton's geometric proof. Newton's proof is convoluted and uses really complicated geometry, but if you would like to know how the master himself did it, there it is. The most common derivation is the differential equation approach. It is the standard textbook approach, and if you know something about calculus and differential equations, it is easier to swallow. Then there is the more complicated version which takes into account the fact that the two bodies move around a common center of mass, instead of one around the other. My favorite version, though, is the one that uses the Laplace-Runge-Lenz vector. Its derivation is elegantly simple, following directly from the $m\vec{a}=-\frac{mMG}{r^2} \hat{r}$ approach. In the other differential equation method, you have to find the acceleration in terms of polar coordinates and then do a creative substitution that makes it end up as a simple second-order differential equation. This one is somewhat less convoluted than that, and once you get the vector, you are only one step away from Kepler's first law. In The Mechanical Universe and Beyond, video 22, titled The Kepler Problem, uses this derivation. Finally, I know there is the one that uses complex functions. Unfortunately, I can't find it online. It is contained in this book, though. If anyone knows of other alternatives, I can post them here.

Posted by ibyea

## Good Kepler News

February 4, 2011

Firstly, the Kepler space telescope discovered a weird, compact planetary system composed of six planets, which you can read about here. Secondly, and this is the best news of all, they have discovered over a thousand candidate stars harboring planets. Over the next few years, expect the number of planets discovered to increase dramatically. hat tip: badastronomy and io9

Posted by ibyea

## Do You Want to Hunt Planets?

February 2, 2011

Recently (I don't know exactly how long ago, though), the Zooniverse project, which tries to involve citizens in helping professional astronomers sort through data, has added a new project to its list. It is called Planet Hunters. What it does is gather the light curve data of stars (basically, a star's brightness through time) from the space telescope Kepler and let us look at them. The basic premise is that stars have planets (well, duh), and some of those stars might have planets that orbit right in front of the star from our point of view. Those planets block some of the light from the star, thereby dimming it. By looking at the change in brightness in the curve, mainly the dipping of brightness at certain moments in time, one can detect planets, as shown in the picture below: Of course, things aren't as simple as that. As you will find out from checking out the web page and the tutorial, the data is full of noise.
The team behind this project, though, believe that because the human brain is so effective at noticing patterns, we might be better at detecting these dips in between all of the noise than the machines. Maybe. Anyways, go ahead and try! Who knows, maybe you might discover a planet.

Posted by ibyea

## Interview on Extrasolar Planet

January 9, 2011

This is so last year, but I want to post this in the interest of general education. The reason I am posting it this late is that I forgot, but now that I have remembered, here it is. The reason I am posting this is that in astronomy, the search for extrasolar planets is more relevant than ever. Better and better technologies like the Kepler space telescope are being used to probe the vast expanses of our galaxy in search of habitable planets. The e-mail interview below is one I did for my English research report for college, but I believe you may find it of benefit too. The topic is the method of searching for extrasolar planets and some of the discoveries astronomers have made. The one being interviewed is Christine Pulliam, public affairs specialist from the Harvard-Smithsonian Center for Astrophysics, to whom I am very thankful for spending some of her probably precious time answering my request and allowing me to post this. I hope you enjoy it: Read the rest of this entry »

1 Comment | science | Tagged: astronomy, exoplanet, interview | Permalink Posted by ibyea

## Bad Universe Episode 2 Review

October 7, 2010

This is my first review of the science show Bad Universe. I missed the pilot, so I am reviewing episode 2. That is too bad, because the first episode was about asteroid impacts, which I think is cooler and much better grounded in reality than tonight's episode: Alien Attack. Here is a teaser, which the host of the show himself, the badastronomer Phil Plait, was generous enough to post on his own blog: So, the first thing I want to comment on is the host, Phil Plait. And just as I expected, he was awesome. The explanations were simple, as expected, but the delivery of them is really good, with neat graphics and visuals backing them up. Also, he injects a nice amount of humor. I thought the random cutaway to the steak scene was quite funny while he was explaining that most living things on Earth use the sun's power. Mostly, though, I think it is his personality and enthusiasm which make the show enjoyable to watch. Today's episode was kind of cheesy, especially the initial invasion scene. That's okay, though. The theme of the episode was alien attacks. I really liked the flying-saucers-lasers-destroying-things scene, which was very reminiscent of Independence Day. The robot, though, was very lame, with its very crummy design. By the way, since this is my first time watching the show, I would like to compliment the comic book style presentation. It is very unique, and I especially love it when they cartoonize the various people Phil Plait is meeting with. Also, the comic book style presentation was really effective when it came to presenting the infectious bacteria from outer space. A live action shot would have probably shown some boring blur of bodies covered with sheets, or other sorts of boring stock footage that these kinds of shows like to bring up. And while some of the annoying repetition, like the freaking alien footage, was here in this show, I think that the show's presentational style kind of balanced it out.
As an example of the comic book style presentation, look at the intro of the latter half of this clip: The science itself was mostly good. The show was mostly devoted to the question of how probable it is that alien life could come to our planet. So naturally, the first thing presented was the Drake equation, which estimates the probability that intelligent life might exist in the galaxy. I thought it had a really neat presentation. It was basically a walkthrough of each variable along with a snazzy graphic showing the letters in a floating 3D look. At the end, he explained that it was all a guess, which I am glad he did. Although I don't think he should have stuck with 20. Maybe he should have mentioned a range, because what the viewers could take away from that is that the number is a fact. Afterwards, Phil showed what it would take for aliens to travel the vast interstellar distances. Basically, one would have to accelerate so much for so long that one would probably throw one's stomach up after the first few days of the trip, as Phil's nauseated look showed after having endured over 4 g's of force in the jet plane. Although this brings up a question: can't they just accelerate in spurts? Since space is a pure vacuum, there is no air friction to slow the ship down. So according to Newton's first law, once you speed up, you just keep going and going. Of course, then the spacecraft would have to slow down, and as you see, the whole enterprise sounds like a mess. Unfortunately, the show didn't mention the ultimate obstacle for space-faring aliens: the speed of light, the speed that no matter can ever reach. At a certain point, no matter how much energy you dump into the ship, it would only get closer to the speed of light, never get there. But then, it is a 45-minute show, and there are only so many things you can put in there, so all is forgiven. My two favorite segments came afterwards. The first one was an experiment trying to show whether E. coli could survive an impact if they came riding on an asteroid into Earth. They did it by putting a solution of bacteria inside a metal ball and shooting it from a long air gun into a pile of sand. The poor blobs didn't make it, unfortunately. So the verdict on alien bacteria arriving on Earth this way: the chances are almost nil. While a lot of bacteria can survive space and radiation, whether they can survive being sent into space after an impact on another planet, and then survive the crash on Earth, is a whole other story. The other cool part was the cave exploration. They were making the point that life doesn't have to be the way we know it, so an extreme planet could support life. They made their point by citing extremophiles, which are bacteria that survive extreme conditions. In the cave, there was no sunlight, yet bacteria thrived by metabolizing minerals on rocks. They managed to scrape some off and show them under a microscope. Very cool. As for the Martian rock thing, it was kind of meh. While I agree that there is a chance Mars had life back in the really old days (as in billions of years ago), I don't think the patterns on the rock are evidence of it. Granted, I was impressed with the patterns on the rock, since I didn't know how weird rocks look microscopically. But it reminds me too much of the previous life-on-a-Martian-meteorite hype in the '90s. Well, I think it was hype. If anyone out there is an astronomer, what do you think? As he says in the end, we need more serious study on this subject.
Finally, there was the replicating-robots-kill-everything scenario at the end. While the scenario is science fiction, I have got to admit, it is quite compelling and really cool. It is my favorite scenario, and no, I am not sadistic (c'mon, they are replicating robots!). He placed this as one of the more probable ones because these are machines, and they can endure the coldness and harshness of space, and grab resources to make more of themselves. In the end, he summarizes the whole thing this way: "We just don't know." And in the end, that is the best answer there is, and the best way to end the program.

Posted by ibyea

## Planet Smashing Discoveries

August 30, 2010

(hat tip to Universe Today for everything below) This is an exciting time for planetary discoveries. Not only has the Kepler mission been launched, but a whole batch of planets, from super-Earths up to Neptune-sized worlds, is being discovered. That is a far cry from a few years ago, when most planets being discovered were giant gas planets the likes of Saturn and Jupiter. Many of them were found extremely close to their star, closer than Mercury is to the sun. In a way, giant planet discoveries are still the norm, but smaller and smaller planets are becoming easier to discover. Take the case of these two star system discoveries: one by ESO with at least 5 planets, and at most 7, most of them Neptune sized. The smallest planet could possibly be 1.4 times the mass of the Earth. The one by Kepler, using a method that detects the dimming of a star by the planets orbiting in front of it, discovered a system of two Saturn-sized planets and one possible 1.5 Earth-mass planet. At this rate, an Earth-sized planet discovery is possible within a few years, although note that neither super-Earth is confirmed yet. But still, one can hope. You know what the most unfortunate aspect of this is, though? The distances in space are so large that not even a space probe could be sent to those places to investigate and snap pictures (my favorite part of a planetary survey). Even if it were possible, it would take hundreds, if not thousands, of years for the probe to get there and send back the data. Oh, and remember, depending on which stars you are talking about, it takes light decades to centuries to reach the Earth from those places. Light is too slow, darn it! If you want to keep up with the planet discoveries, you should get this iPhone exoplanet catalogue app. Otherwise, you might go to the next best thing, which is this catalogue, by the same creator as the app.

Posted by ibyea

## Spaghettification, Calorification, Carbonization

August 28, 2010

Ok, Sylvester McCoy's era of Doctor Who reference aside, this post will be about a horrible process of death via black hole called spaghettification. That's right, spaghettification is actually the official word for what happens to stuff that goes into a black hole. This post has been inspired by this Let's Play Mario Galaxy video by chuggaconroy, who, by the way, makes great videogame walkthrough videos. In it, he tries to explain what spaghettification is, starting from 7:15: Although his explanation of spaghettification sounds awesome, it is incorrect. Of course, he is not a physics expert, so he gets major parts of the process wrong. That's okay, though. Not everyone can be a physicist. I am not one either, but I understand it well enough. If someone out there knows better, feel free to correct me.
If you feel like my explanation is too much, then you can just watch the fun explanation of dismemberment by Neil deGrasse Tyson below. Read the rest of this entry »
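(The explanation itself is behind the cut, so the following is only a rough sketch of the standard back-of-the-envelope estimate, not text taken from the post; the symbols $M$, $r$ and $d$ are introduced here purely for illustration. Spaghettification comes from the tidal gradient of gravity: for a body of length $d$ falling feet-first toward a mass $M$ at distance $r$, the difference between the pull on its near and far ends is approximately

$$\Delta a \;=\; \frac{GM}{(r-d/2)^2}-\frac{GM}{(r+d/2)^2}\;\approx\;\frac{2GMd}{r^3},$$

so the stretching grows like $1/r^3$ and, near a stellar-mass black hole, becomes lethal well before the event horizon is reached.)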
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606401324272156, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/5146/why-is-a-non-fixed-length-encryption-scheme-worse-than-a-fixed-length-one
# Why is a non fixed-length encryption scheme worse than a fixed-length one?

I have the following definition (highlights by me): An (efficient secret-key) encryption scheme $(Gen,Enc,Dec)$, where $Gen$ and $Enc$ are PPT algorithms and $Dec$ is a Deterministic Polytime Algorithm, is one-time computationally-secret if for any PPT adversary $A$ the probability that the following experiment outputs $1$ is negligibly biased from $1/2$:

1. The adversary outputs two messages $m_0$ and $m_1$ of the same length.
2. Let $k \leftarrow Gen(1^n)$ and let $b \in \{0,1\}$ be chosen uniformly at random. $c \leftarrow Enc_k(m_b)$ is computed and given to $A$.
3. $A$ outputs the bit $b'$.
4. The output of the experiment is 1 $\iff b'=b$.

I wonder why $m_0$ and $m_1$ should be of the same length; why is this better than considering a non-fixed-length encryption scheme? The only drawback that comes to my mind is that if you are able to know something about the encryption of two messages of different length before they are actually encrypted (note that the key is generated in Step 2), then it would justify the choice of fixed length. But I cannot think of a reason why it must be so. Any ideas? -

## 2 Answers

You had your finger on it: you do know something about the encryption of two messages of different length before they are actually encrypted, namely the length of the corresponding ciphertexts. If the setting in which you're using your encryption scheme allows for a maximum message length, then you can always pad to make every ciphertext the same size (inefficient!), but otherwise, since encryption is supposed to be invertible, a longer plaintext means a longer ciphertext. This means that you would have a trivial distinguisher in the security definition if you allowed messages of different length. This ended up being longer than I intended it to be, but to sum it up: if you want to make sure you understood, you should try to convince yourself that CBC encryption is very distinguishable in the security definition you want. -

I am not sure about the part about the length of the corresponding ciphertext: note that in the definition I gave above the encryption algorithm $Enc$ is a PPT algorithm, therefore we know that $\forall n$ (security parameter) $\forall m$, $||Enc(m)|| < l(||k||+||m||)$, where $l(x) \in \mathbb{Z}[X]$, so we know that the length (i.e. the number of bits, denoted by $||\cdot||$) is polynomially bounded, so I don't think we can say that we know the length of the corresponding ciphertext... – Alan Bletchley Oct 24 '12 at 19:10

@AlanBletchley Yes, there might be efficient secret-key encryption schemes according to your definition where the length of the ciphertext depends on the plaintext contents (not just its length), but you can prove that these can't be one-time computationally-secret (since the adversary could then use two plaintexts which give different ciphertext lengths, and distinguish them by this property). – Paŭlo Ebermann♦ Oct 24 '12 at 19:38

@AlanBletchley: In most practical schemes, the length of the ciphertext depends deterministically on the length of the plaintext, and there will exist at least two plaintexts that, with probability 1, map to ciphertexts with different lengths. You can certainly find counterexamples (Alexandre gives one based on padding). But defining the security of an algorithm in a way that prevents length information from leaking excludes many useful, otherwise secure algorithms. The security definition in your question is useful and "good enough" for most contexts.
– Seth Oct 24 '12 at 19:40

The encryption scheme in the experiment you describe does not have to be fixed-length. We simply require that the two messages the adversary sends to its oracle have the same length. The restriction is on the adversary, not on the encryption algorithm. So why do we put this requirement on the adversary? The reason is that in every practical encryption scheme, ciphertexts leak information about the length of the corresponding plaintexts --- the longer the plaintext, the longer the ciphertext. So cryptographers basically threw their hands up in the air and said, "Fine, we'll just concede that length information leaks. Can we make sure no other information leaks?" Requiring the lengths of $m_0$ and $m_1$ to be the same ensures that the adversary can't guess which one was encrypted simply by looking at the length of the ciphertext it gets back. Therefore the experiment you describe measures an adversary's ability to find some other way that information leaks. -
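To make the "trivial distinguisher" from the first answer concrete, here is a minimal sketch in Python. The toy scheme (a one-time pad with a fresh random key as long as the message) and all the function names are made up for illustration; the point is only that any scheme whose ciphertext length tracks its plaintext length is trivially distinguishable the moment the two challenge messages may differ in length.

```python
import os

def keygen(nbytes):
    # fresh uniformly random key, as long as the message (one-time pad)
    return os.urandom(nbytes)

def encrypt(key, message):
    # XOR message with key; ciphertext length equals plaintext length
    return bytes(m ^ k for m, k in zip(message, key))

def challenge(m0, m1):
    # the experiment: pick b at random, encrypt m_b, return the ciphertext
    b = os.urandom(1)[0] & 1
    mb = (m0, m1)[b]
    return b, encrypt(keygen(len(mb)), mb)

def length_adversary(m0, m1, ciphertext):
    # guesses b from the ciphertext length alone -- no cryptanalysis at all
    return 0 if len(ciphertext) == len(m0) else 1

m0, m1 = b"short", b"a much longer message"  # different lengths, so the game is rigged
wins = sum(length_adversary(m0, m1, c) == b
           for b, c in (challenge(m0, m1) for _ in range(1000)))
print(wins / 1000)  # prints 1.0 -- far from the 1/2 the definition demands
```

With two equal-length challenge messages the same adversary is reduced to guessing, which is exactly why the definition pins the lengths down while leaving the scheme itself free to handle variable-length inputs.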
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92559814453125, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/185656/show-that-exists-delta-0-such-thatdx-y-geq-delta?answertab=active
# Show that there exists $\delta >0$ such that $d(x,y)\geq \delta$

This is the problem: Let $X$ be a metric space with metric $d$ and $K\subset X$ compact and $F\subset X$ closed and $K\cap F=\varnothing$. Let $x\in F,y\in K$. Prove that there exists $\delta>0$ such that for every $x\in F$ and $y\in K$, $d(x,y)> \delta.$

My idea was to use sequences, since I know that in a compact set every sequence has a subsequence converging to an element of $K$, and since $F$ is closed, a sequence in $F$ that converges has its limit in $F$. But then I got stuck... I don't know what to do now. Any hint? Much appreciated! -

## 2 Answers

Assume the contrary; then for every $n$ you can find $x_n\in K, y_n\in F$ such that $d(x_n,y_n)<\frac1n$. Since $K$ is compact there is a convergent subsequence $x_{n_k}$, whose limit is $x$. By the triangle inequality $$d(x,y_{n_k})\leq d(x,x_{n_k})+d(x_{n_k},y_{n_k})$$ Juggle $\varepsilon$'s a bit and derive that $x\in F$, and the contradiction. -

I thought about the triangle inequality... but I looked at it and it seemed to be wrong... couldn't go forward. Thanks!!! – Charlie Aug 23 '12 at 0:53

I assume that you want to prove that there exists $\delta > 0$ such that for every $x\in F$ and $y\in K$, $d(x,y) > \delta$. (The order of quantifiers is opposite in your question, which makes it trivial.) Hint: prove that there is a sequence $(x_k, y_k)$ s.t. $d(x_k,y_k) < 1/k$. Find a converging subsequence $y_{k_i}$. Let $y_{k_i} \to y \in K$ as $i\to \infty$. Prove that $x_{k_i} \to y \in K$. Conclude that $K$ and $F$ are not disjoint. -

Thanks a lot! It's really useful! – Charlie Aug 23 '12 at 0:53
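Spelling out the $\varepsilon$-juggling left to the reader in the first answer (same notation; nothing here goes beyond what that answer sets up): fix $\varepsilon>0$. Choose $K_1$ with $d(x,x_{n_k})<\varepsilon/2$ for all $k\geq K_1$ (possible since $x_{n_k}\to x$), and $K_2$ with $\frac{1}{n_k}<\varepsilon/2$ for all $k\geq K_2$. Then for $k\geq\max(K_1,K_2)$,

$$d(x,y_{n_k})\leq d(x,x_{n_k})+d(x_{n_k},y_{n_k})<\frac{\varepsilon}{2}+\frac{1}{n_k}<\varepsilon,$$

so $y_{n_k}\to x$. The $y_{n_k}$ lie in the closed set $F$, hence $x\in F$; and $x$ is the limit of points of the compact (hence closed) set $K$, so $x\in K$ as well. Thus $x\in K\cap F$, contradicting $K\cap F=\varnothing$.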
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9651491045951843, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/homological-algebra+sheaf-theory
# Tagged Questions 0answers 75 views ### A question of extension of vector bundles. Fix $p \in \mathbb{P}^1$. Let $X=\mathbb{P}^1\times \mathbb{P}^1$, $C_1=\mathbb{P}^1\times \{p\}$ and $C_2=\{p\}\times \mathbb{P}^1$. Since \$\mathrm{Ext}^1(\mathcal{O}_{C_2},\mathcal{O}_{C_1})\cong ... 1answer 79 views ### Will $i^*$ pull back injectives to injectives? Let $i: Z \rightarrow X$ be a closed embedding. Need $i^*$ of an injective sheaf of abelian groups be injective? Need $i^*$ of a flabby sheaf be flabby? Thanks :D! Also (maybe should be a separate ... 1answer 104 views ### Is there a quasi-isomorphism between a complex of sheaves and its Godement resolutions? I have a doubt, I read somewhere that the Godement resolution of a sheaf $\mathcal{F}$ is a quasi-isomorphism $\mathcal{F} \rightarrow C^\bullet(\mathcal{F})$. Just right off the bat when I read ... 0answers 87 views ### Adapted classes of objects and left (right) exact functors I had a question about adapted classes of objects, I was confused by the definition and how it relates to left exact functors. Let $\mathcal{A}$ be an abelian category with enough injectives, let \$F: ... 0answers 136 views ### Composition of derived functors and comparison between hypercohomology and sheaf cohomology I had a few questions about compositions of derived functors, the comparison between hypercohomology, and sheaf cohomology and the following theorem from the Gelfand, Manin homological algebra book: ... 1answer 73 views ### Is it true, that $H^1(X,\mathcal{K}_{x_1,x_2})=0$? - The cohomology of the complex curve with a coefficient of the shaeaf of meromorphic functions… Let X be complex curve (complex manifold and $\dim X=1$). For $x_1,x_2\in X$ we define the sheaf $\mathcal{K}_{x_1,x_2}$(in complex topology) of meromorphic functions vanish at the points $x_1$ and ... 0answers 110 views ### Computing the hypercohomology of a complex of acyclic sheaves Let $K^{\bullet}$ be a cochain complex of sheaves of finite-dimensional vector spaces, I wanted to compute $\mathbb{H}^{\bullet}(X,K^{\bullet})$ = the hypercohomology of the complex $K^{\bullet}$, the ... 0answers 186 views ### Why didn't Cartan-Eilenberg develop homological algebra on sheaf theory? Cartan-Eilenberg created homological algebra on modules over rings. I wonder why they didn't develop it also on sheaves over ringed spaces. Grothendieck and Godement did that soon after(or almost at ... 1answer 153 views ### Flabby sheaves and comparison of topologies Let $A^p$ be a group of sheaves on a topological space $X$, let $F$ be the global sections functor $F(A^p) = A^p(X)$. I have to compute the cohomology of the complex \$0\rightarrow A^1(X) \rightarrow ... 1answer 270 views ### Confused about Hypercohomology terminology and meaning check this: Given a sheaf complex $F^\bullet$, let's say I want to compute the hypercohomology of this complex, if we consider the bicomplex of sheaves \$C^\bullet(F^\bullet) = (C^p(F^q))\quad ... 1answer 101 views ### Flabby sheaves and exact sequences of sheaves - Question about proof I was going through this proof from Rotman's 'Introduction to homological algebra' (Pages 381-382) and I just can't seem to make sense of it, am not super well-versed in this so I don't know if it's ... 1answer 114 views ### Question about proof that Flabby sheaves are acyclic Can anybody help me understand this proof? In Rotman's 'An introduction to homological algebra' in Proposition 6.75 (iii). Flabby sheaves $\mathcal{L}$ are acyclic (Page 381), in the proof it says ... 
1answer 79 views ### Is the presheaf of continuous functions on a topological space a “complete presheaf”? Is the presheaf of continuous functions $f:A\rightarrow B$ from a topological space $A$ to another topological space $B$ a "complete presheaf"? Can't find this, anyone have a reference? 2answers 185 views ### Is the sheaf of locally constant functions flasque? Quick question, is the sheaf of locally constant functions flasque?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9065398573875427, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/107591-quadratic-equation-doesn-t-seem-factorise.html
# Thread:

1. ## this quadratic equation doesn't seem to factorise

Hi guys, I've been trying to factorise the following equation: 4p^2 + 12p + 9 = 0 I can't seem to find any values for a and b so that 4a+b=12 and a*b=9 Any help on this? thanks

2. Originally Posted by portia
Hi guys, I've been trying to factorise the following equation: 4p^2 + 12p + 9 = 0 I can't seem to find any values for a and b so that 4a+b=12 and a*b=9 Any help on this? thanks
(2p+3)(2p+3)

3. Thanks, I do appreciate your answer; however, I'd like to know how you did it.

4. Originally Posted by portia
Thanks, I do appreciate your answer; however, I'd like to know how you did it.
I find that explaining methods of factoring is very tricky... basically I take advantage of the fact that I had it drilled into me and I am quite quick at multiplying in my head. But here's my shot at it: $4x^2+12x+9$. 4 has the factor pairs 1, 4 and 2, 2. So if this thing is factorable it has one of the two forms below: 1) (x )(4x ) or 2) (2x )(2x ). So I just pick one and run with it; let's say we guess wrong. Try (x )(4x ). Now 9 has the factor pairs 1, 9 and 3, 3, so we can put 1 and 9 together in either order, or 3 and 3 together. Since 9 is positive, either both numbers are negative or both numbers are positive. So our possible combinations are: (x+3)(4x+3), (x-3)(4x-3), (x+1)(4x+9), (x+9)(4x+1), (x-1)(4x-9), (x-9)(4x-1). I run through all these in my head and realize that I was not getting 12x out of the deal, so I move to the next one. Try (2x )(2x ). Now for the blank spaces we have exactly the same combinations as above. I run through them in my head (and by this I mean I say to myself: (2x+1)(2x+9) will give me 20x, nope, no good, try again) and I get that (2x+3)(2x+3) is my answer. Hope this helps, and by the way, once you get good at this, this method is significantly quicker than setting up those equations and staring at them.

5. Ok, I kind of understand it. I also used the quadratic formula and got -3/2, which makes sense: (x+3/2)(x+3/2), and multiplying each factor by 2 gives (2x+3)(2x+3). So in this case there's only one solution: x=-3/2. I'll need to practise it a lot more. I am ok when the coefficient 'a' is 1, as in x^2+4x........ but when 'a' is different from 1 then I sometimes struggle. thanks

6. You can of course always apply the quadratic formula. When you apply it, you get 2 answers in general; call them $r_1$ and $r_2$. Then the quadratic can be written as $a(x-r_1)(x-r_2)$. In your case $r_1=r_2=-\frac{3}{2}$. So if you were stuck on a test and had time this is a fail-safe, but obviously factoring is preferred.

7. The quick way to see whether a quadratic will factorise (at least until you're good at doing it mentally) is to find the discriminant, $\Delta$: $\Delta = b^2-4ac$. If $\Delta$ is a perfect square it will factorise.

8. Yeah, but you can end up with fractions as the factors, in which case I would just use the quadratic formula anyway.

9. Originally Posted by portia
Hi guys, I've been trying to factorise the following equation: 4p^2 + 12p + 9 = 0 I can't seem to find any values for a and b so that 4a+b=12 and a*b=9 Any help on this? thanks
This is a complex trinomial, which means you have to multiply the A value by the C value and find two numbers that add up to 12 and multiply to 36. In this case they will be 6p and 6p. From there, find the greatest common factor of the first two terms and of the last two terms. If you did it right, you will get the same bracket twice in your work.
This is how your work should go:

$0 = 4p^2+12p+9$
$0 = 4p^2+6p+6p+9$
$0 = 2p(2p+3)+3(2p+3)$
$0 = (2p+3)(2p+3)$
$0 = (2p+3)^2$

Therefore $4p^2+12p+9=0$ factorises as $(2p+3)^2=0$, and the single (repeated) root is $p=-\frac{3}{2}$.
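The "multiply A by C, then find two numbers that sum to B" step in the last post is easy to mechanise. The sketch below is only an illustration of that search plus the discriminant check from post 7 (the function names are invented here; a CAS call such as sympy's factor() would do the whole job directly):

```python
from math import isqrt

def split_middle_term(a, b, c):
    """Find integers (p, q) with p*q == a*c and p + q == b, if they exist."""
    target = a * c
    for p in range(-abs(target), abs(target) + 1):
        if p == 0:
            continue
        if target % p == 0:
            q = target // p
            if p + q == b:
                return p, q
    return None

def discriminant_is_square(a, b, c):
    """Quick factorability check from post 7: is b^2 - 4ac a perfect square?"""
    d = b * b - 4 * a * c
    return d >= 0 and isqrt(d) ** 2 == d

print(split_middle_term(4, 12, 9))       # (6, 6): 4p^2 + 6p + 6p + 9
print(discriminant_is_square(4, 12, 9))  # True: 144 - 144 = 0 is a perfect square
```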
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606500864028931, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/tags/basic-knowledge/hot?filter=year
# Tag Info

## Hot answers tagged basic-knowledge

4

### Which categories of cipher are semantically secure under a chosen-plaintext attack?

Encryption using a block cypher such as AES by passing plaintext blocks directly to the encryption function is known as Electronic Code Book mode (ECB) and is not CPA secure because (as you say in your question) it is entirely deterministic, and two identical plaintext blocks will result in two identical ciphertext blocks. To prevent this an initialisation ...

1

### Which categories of cipher are semantically secure under a chosen-plaintext attack?

To be secure against a chosen-plaintext attack, an encryption scheme must be non-deterministic — that is, its output must include a random element, so that e.g. encrypting the same plaintext twice will result in two different ciphertexts. Indeed, if that were not the case, an attacker could easily win the IND-CPA game just by using the encryption ...

1

### Decimal to binary question [closed]

In base 10 we write for example $133$ when we mean $$133 = 1 * 10^2 + 3*10^1 + 3*10^0.$$ If we want to write $49$ in base $2$ then note first that: $$49 = 1*2^5 + 1*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 1*2^0.$$ Because of this, $49$ is $110001$. Now obviously, you "don't know this", but I wanted to write it down so that you can see what happens as you divide by ...

Only top voted, non community-wiki answers of a minimum length are eligible
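The base-conversion answer above is cut off right where the division step would begin. As a sketch of that standard repeated-division-by-2 procedure (this code is not from the truncated answer; it just reproduces the usual algorithm):

```python
def to_binary(n):
    """Convert a non-negative integer to its base-2 digit string by repeated division."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next binary digit, least significant first
        n //= 2
    return "".join(reversed(digits))

print(to_binary(49))   # 110001, matching the answer's expansion 32 + 16 + 1
print(to_binary(133))  # 10000101
```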
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278448820114136, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/1557-physics-problem.html
# Thread:

1. ## A physics problem

This physics problem appeared in my Calculus book: Given a rod with length $a$, it is hinged so that it can freely rotate. An object is attached to the end of the rod and it is brought all the way vertically up. Then a slight disturbance and it flies down (from equilibrium). How long does it take for it to come down to the minimum point? Just remember it travels in a circular motion. My book gave the answer: $\sqrt{\frac{a}{g}}\ln{(1+\sqrt{2})}$ Where $g$, I am guessing, is the gravitational acceleration.

2. Originally Posted by ThePerfectHacker
This physics problem appeared in my Calculus book: Given a rod with length $a$, it is hinged so that it can freely rotate. An object is attached to the end of the rod and it is brought all the way vertically up. Then a slight disturbance and it flies down (from equilibrium). How long does it take for it to come down to the minimum point? Just remember it travels in a circular motion. My book gave the answer: $\sqrt{\frac{a}{g}}\ln{(1+\sqrt{2})}$ Where $g$, I am guessing, is the gravitational acceleration.
It seems to me that there is something wrong here. It looks to me that as the disturbance goes to zero the time to fall goes to $\infty$. RonL

3. This is just like a free-falling problem from physics, except this one is in a circular path? Did you try to do the problem and get some unusual result?

4. Originally Posted by ThePerfectHacker
This is just like a free-falling problem from physics, except this one is in a circular path? Did you try to do the problem and get some unusual result?
What I did was derive the relevant DE for this problem (I was overcome with curiosity, because my physical intuition was that it should take an arbitrarily long time to fall from an unstable equilibrium as the initial disturbance becomes arbitrarily small). The DE can be solved, but gives an answer for the (quarter) period in terms of an elliptic integral (not necessarily a problem in itself, as we want a special case which could in principle have reduced to something more easily manageable). Examining the integral showed that there was a singularity at one end of the range of integration for the given initial condition, and that near the singularity the integrand behaved like $1/x$, and so the integral should be divergent. So I did some research, and as far as I can tell (the research did not turn up this exact problem but something similar) it indicates that my initial intuition was right. RonL

The nearest reference to this problem is in Morris Kline's "Calculus : An Intuitive and Physical Approach" http://www.amazon.com/gp/product/048...lance&n=283155

5. But this problem cannot have some kind of extremely complicated solution (elliptic integral); it appeared in my Calculus book for universities. Perhaps, I am thinking, this is somehow connected with the swinging pendulum?

6. Originally Posted by ThePerfectHacker
But this problem cannot have some kind of extremely complicated solution (elliptic integral); it appeared in my Calculus book for universities. Perhaps, I am thinking, this is somehow connected with the swinging pendulum?
An elliptic integral is not complicated in principle, it's just that in general it does not have a closed-form representation in terms of elementary functions. Morris Kline's book I referred to is an undergraduate calculus text. And yes, it is a rigid pendulum (being rigid allows the thing to be started with the weight above the pivot without it just falling straight down initially). RonL

7. Do you know how to solve it?
Because all my attempts failed.

8. There are two methods of obtaining the Differential Equation governing the motion of the bob of a (rigid) pendulum. (I will assume that the rod is "light", that is, its mass is negligible compared to that of the bob.)

One is to just consider the energy of the bob. The total energy is: $TE=PE+KE$, where $TE$ is the total energy, $PE$ is the potential energy and $KE$ is the kinetic energy. (Another is to resolve the force due to gravity on the bob into components along the rod and perpendicular to the rod; the perpendicular component gives the torque which gives rise to angular acceleration, and the other component is cancelled by the tension/compression forces in the rod and the reaction at the pivot.)

Now we will describe the position of the bob by a single angle $\theta$ (see the diagram). Take the reference height for the zero of potential energy to be the height of the bob when $\theta=0$; then: $PE=m.g.(a-a\cos(\theta))$, and: $KE=\frac{1}{2}I\dot \theta ^2$, where $I=a^2m$ is the moment of inertia (assuming a light rod).

So: $m.g.a(1-\cos(\theta_0))=m.g.a(1-\cos(\theta))+\frac{1}{2}m.a^2.\dot \theta^2$, where $\theta_0$ is the maximum angular amplitude. Rearranging: $\dot \theta ^2 = \frac{2g}{a} (\cos (\theta)-\cos (\theta_0))$.

This is a DE of separable type which may be integrated to give: $\pm \int_{\theta_0}^{\phi} \frac{1}{(\cos (\theta)-\cos (\theta_0))^{\frac{1}{2}}}d\theta=t_{\phi}\sqrt{ \frac{2g}{a} }$, where $t_{\phi}$ is the time at which the bob is at angle $\phi$ (after having been released from $\theta=\theta_0$ at $t=0$).

Now consider just the first swing: then we want the sign in front of the integral to be negative, and the bob reaches the bottom of its swing when $\phi=0$, so the time we are looking for is: $\tau=\sqrt{ \frac{a}{2g} } \int_{0}^{\theta_0} \frac{1}{(\cos (\theta)-\cos (\theta_0))^{\frac{1}{2}}}d\theta$.

Now I think that the integral on the RHS diverges as $\theta_0 \rightarrow \pi$.

RonL

(apologies in advance for any errors that may have crept into my LaTeX).

Attached Thumbnails

9. Thank you
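As a numerical sanity check of the divergence RonL points to at the end of post 8 (this sketch is not from the thread: it uses the standard substitution $\sin(\theta/2)=\sin(\theta_0/2)\sin u$, which turns the quarter-period integral above into a complete elliptic integral, $\tau=\sqrt{a/g}\,K(\sin(\theta_0/2))$; the values $a=1\,\mathrm{m}$ and $g=9.81\,\mathrm{m/s^2}$ are chosen only for illustration):

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral of the first kind, parameter m = k^2

def quarter_period(theta0, a=1.0, g=9.81):
    """Time to fall from rest at angle theta0 (measured from the bottom) to the lowest point."""
    k = np.sin(theta0 / 2.0)
    return np.sqrt(a / g) * ellipk(k ** 2)

for theta0 in [np.pi / 2, 0.9 * np.pi, 0.99 * np.pi, 0.999 * np.pi, 0.9999 * np.pi]:
    print(f"theta0 = {theta0:.4f} rad  ->  tau = {quarter_period(theta0):.3f} s")

# tau keeps growing (roughly logarithmically in pi - theta0) as theta0 approaches pi,
# which is the point made in the thread: released exactly from the top, the fall time
# is infinite, so the book's finite closed-form answer cannot be right as stated.
```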
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338138699531555, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/tagged/zero-knowledge-proofs?sort=frequent&pagesize=30
# Tagged Questions "Zero Knowledge Proof" is an interactive method for one party to prove to another that a statement is true, without revealing anything other than the veracity of the statement. learn more… | top users | synonyms 2answers 287 views ### Proof that lottery does not know outcome of draw Could a variable participant lottery system cryptographically prove that they have zero knowledge of the outcome of a draw? Participants do not choose numbers in this lottery and winning numbers are ... 3answers 208 views ### Is there a public key semantically secure cryptosystem for which one can prove in zero knowledge the equivalence of two plaintexts? If Alice encrypts two messages $a$ and $b$, such that $x=E(a)$, $y=E(b)$. Can Alice prove (without revealing $a$, $b$ or the private key) that $a = b$? Obviously the proof must not be too long and it ... 1answer 211 views ### Why does SRP-6a use k = H(N, g) instead of the k = 3 in SRP-6? I've been reading up on the Secure Remote Pasword protocol (SRP). There are a couple different versions of the protocol (the original published version being designated SRP-3, with two subsequent ... 0answers 163 views ### Is there a practical zero-knowledge proof for this special discrete log equation? We have a multiplicative cyclic group $G$ with generators $g$ and $h$, as in El Gamal. Assume $G$ is a subgroup of $(\mathbb{Z}/n\mathbb{Z})^*$. There are two parties, Alice and Bob: Alice knows: ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178221225738525, "perplexity_flag": "middle"}